Sample records for imaging features predictive

  1. Radiomic biomarkers from PET/CT multi-modality fusion images for the prediction of immunotherapy response in advanced non-small cell lung cancer patients

    NASA Astrophysics Data System (ADS)

    Mu, Wei; Qi, Jin; Lu, Hong; Schabath, Matthew; Balagurunathan, Yoganand; Tunali, Ilke; Gillies, Robert James

    2018-02-01

    Purpose: To investigate the ability of complementary information provided by the fusion of PET/CT images to predict immunotherapy response in non-small cell lung cancer (NSCLC) patients. Materials and methods: We collected 64 patients diagnosed with primary NSCLC and treated with anti-PD-1 checkpoint blockade. Using the PET/CT images, fused images were created following multiple methodologies, resulting in up to 7 different images for the tumor region. Quantitative image features were extracted from the primary images (PET/CT) and the fused images: 195 features from the primary images and 1235 features from the fusion images. Three clinical characteristics were also analyzed. We then used support vector machine (SVM) classification models to identify discriminant features that predict immunotherapy response at baseline. Results: An SVM built with 87 fusion features and 13 primary PET/CT features achieved an accuracy of 87.5% and an area under the ROC curve (AUROC) of 0.82 on the validation dataset, compared with 78.12% and 0.68 for a model built with 113 original PET/CT features. Conclusion: The fusion features show better ability to predict immunotherapy response than the individual image features.
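    The SVM workflow this abstract describes can be sketched with scikit-learn on synthetic stand-in data; the feature count, labels, and split below are illustrative assumptions, not the authors' dataset or tuned model:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in: 64 patients x 100 "fusion" features, binary response label.
n_patients, n_features = 64, 100
X = rng.normal(size=(n_patients, n_features))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize the features, then fit an SVM; report validation accuracy and AUROC.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
auc = roc_auc_score(y_te, clf.decision_function(X_te))
print(f"validation accuracy={acc:.2f}, AUROC={auc:.2f}")
```

In the paper the discriminant features were first selected from the full candidate pool; here a single pipeline stands in for that two-stage process.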

  2. Feature maps driven no-reference image quality prediction of authentically distorted images

    NASA Astrophysics Data System (ADS)

    Ghadiyaram, Deepti; Bovik, Alan C.

    2015-03-01

    Current blind image quality prediction models rely on benchmark databases comprised of singly and synthetically distorted images, thereby learning image features that are only adequate to predict human perceived visual quality on such inauthentic distortions. However, real world images often contain complex mixtures of multiple distortions. Rather than a) discounting the effect of these mixtures of distortions on an image's perceptual quality and considering only the dominant distortion or b) using features that are only proven to be efficient for singly distorted images, we deeply study the natural scene statistics of authentically distorted images, in different color spaces and transform domains. We propose a feature-maps-driven statistical approach which avoids any latent assumptions about the type of distortion(s) contained in an image, and focuses instead on modeling the remarkable consistencies in the scene statistics of real world images in the absence of distortions. We design a deep belief network that takes model-based statistical image features derived from a very large database of authentically distorted images as input and discovers good feature representations by generalizing over different distortion types, mixtures, and severities, which are later used to learn a regressor for quality prediction. We demonstrate the remarkable competence of our features for improving automatic perceptual quality prediction on a benchmark database and on the newly designed LIVE Authentic Image Quality Challenge Database and show that our approach of combining robust statistical features and the deep belief network dramatically outperforms the state-of-the-art.
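    The natural-scene-statistics features underpinning this line of work can be illustrated with the standard mean-subtracted contrast-normalized (MSCN) transform; a minimal sketch on a random test image (the Gaussian window width and the moments chosen as features are illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7 / 6):
    """Mean-subtracted contrast-normalized (MSCN) coefficients, the classic
    NSS transform used as the front end of blind quality models."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                      # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu   # local variance
    sigma_map = np.sqrt(np.clip(var, 0, None))              # local contrast
    return (image - mu) / (sigma_map + 1.0)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
coeffs = mscn(img)

# The MSCN histogram is roughly zero-mean and heavy-tailed; simple moments
# of it (and of products of neighboring coefficients) serve as features.
features = [float(coeffs.mean()), float(coeffs.var()), float(np.abs(coeffs).mean())]
print(features)
```

A full model would fit parametric distributions to these coefficients in several color spaces and transform domains and feed the parameters to the learner.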

  3. Perceptual quality prediction on authentically distorted images using a bag of features approach

    PubMed Central

    Ghadiyaram, Deepti; Bovik, Alan C.

    2017-01-01

    Current top-performing blind perceptual image quality prediction models are generally trained on legacy databases of human quality opinion scores on synthetically distorted images. Therefore, they learn image features that effectively predict human visual quality judgments of inauthentic and usually isolated (single) distortions. However, real-world images usually contain complex composite mixtures of multiple distortions. We study the perceptually relevant natural scene statistics of such authentically distorted images in different color spaces and transform domains. We propose a “bag of feature maps” approach that avoids assumptions about the type of distortion(s) contained in an image and instead focuses on capturing consistencies—or departures therefrom—of the statistics of real-world images. Using a large database of authentically distorted images, human opinions of them, and bags of features computed on them, we train a regressor to conduct image quality prediction. We demonstrate the competence of the features toward improving automatic perceptual quality prediction by testing a learned algorithm using them on a benchmark legacy database as well as on a newly introduced distortion-realistic resource called the LIVE In the Wild Image Quality Challenge Database. We extensively evaluate the perceptual quality prediction model and algorithm and show that it is able to achieve good-quality prediction power that is better than other leading models. PMID:28129417

  4. The value of nodal information in predicting lung cancer relapse using 4DPET/4DCT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Heyse, E-mail: heyse.li@mail.utoronto.ca; Becker, Nathan; Raman, Srinivas

    2015-08-15

    Purpose: There is evidence that computed tomography (CT) and positron emission tomography (PET) imaging metrics are prognostic and predictive of non-small cell lung cancer (NSCLC) treatment outcomes. However, few studies have explored the use of standardized uptake value (SUV)-based image features of nodal regions as predictive features. The authors investigated and compared the use of tumor and node image features extracted from the radiotherapy target volumes to predict relapse in a cohort of NSCLC patients undergoing chemoradiation treatment. Methods: A prospective cohort of 25 patients with locally advanced NSCLC underwent 4DPET/4DCT imaging for radiation planning. Thirty-seven image features were derived from the CT-defined volumes and SUVs of the PET image from both the tumor and nodal target regions. The machine learning methods of logistic regression and repeated stratified five-fold cross-validation (CV) were used to predict local and overall relapses within 2 yr. The authors used well-known feature selection methods (Spearman’s rank correlation, recursive feature elimination) within each fold of CV. Classifiers were ranked on their Matthews correlation coefficient (MCC) after CV. Area under the curve, sensitivity, and specificity values are also presented. Results: For predicting local relapse, the best classifier found had a mean MCC of 0.07 and was composed of eight tumor features. For predicting overall relapse, the best classifier found had a mean MCC of 0.29 and was composed of a single feature: the volume greater than 0.5 times the maximum SUV (N). Conclusions: The best classifier for predicting local relapse had only tumor features. In contrast, the best classifier for predicting overall relapse included a node feature. Overall, the methods showed that nodes add value in predicting overall relapse but not local relapse.
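    A hedged sketch of the evaluation protocol described here (logistic regression under repeated stratified five-fold CV, scored by the Matthews correlation coefficient) on synthetic stand-in data; the 25 x 37 shape mirrors the abstract, everything else is illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer, matthews_corrcoef
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for 25 patients x 37 tumor/node image features.
X, y = make_classification(n_samples=25, n_features=37, n_informative=5,
                           random_state=0)

# Repeated stratified 5-fold CV; classifiers ranked on MCC, as in the study.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
clf = LogisticRegression(max_iter=1000)
mcc = cross_val_score(clf, X, y, cv=cv, scoring=make_scorer(matthews_corrcoef))
print(f"mean MCC over repeated 5-fold CV: {mcc.mean():.2f}")
```

In the study itself, feature selection was additionally performed inside each fold, which keeps the selection step from leaking test-fold information.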

  5. Prediction of troponin-T degradation using color image texture features in 10d aged beef longissimus steaks.

    PubMed

    Sun, X; Chen, K J; Berg, E P; Newman, D J; Schwartz, C A; Keller, W L; Maddock Carlin, K R

    2014-02-01

    The objective was to use digital color image texture features to predict troponin-T degradation in beef. Image texture features, including 88 gray-level co-occurrence texture features, 81 two-dimensional fast Fourier transform texture features, and 48 Gabor wavelet filter texture features, were extracted from color images of beef strip steaks (longissimus dorsi, n = 102) aged for 10 d, obtained using a digital camera and additional lighting. Steaks were designated degraded or not-degraded based on troponin-T degradation determined on d 3 and d 10 postmortem by immunoblotting. Statistical (STEPWISE regression) and support vector machine (SVM) models were designed to classify protein degradation. The d 3 and d 10 STEPWISE models were 94% and 86% accurate, respectively, while the d 3 and d 10 SVM models were 63% and 71% accurate, respectively, in predicting protein degradation in aged meat. STEPWISE and SVM models based on image texture features show potential to predict troponin-T degradation in meat.
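    Gray-level co-occurrence features of the kind used here can be computed directly; a simplified numpy sketch with one horizontal offset, 8 gray levels, and a random stand-in image (real pipelines typically average several offsets and angles, and work per color channel):

```python
import numpy as np

def glcm_features(img, levels=8):
    """Gray-level co-occurrence matrix for the horizontal neighbor at
    distance 1, plus two classic Haralick-style statistics."""
    q = (img.astype(np.float64) / 256 * levels).astype(int)  # quantize to 0..levels-1
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1
    glcm /= glcm.sum()                                        # joint probabilities
    r, c = np.indices(glcm.shape)
    contrast = float(((r - c) ** 2 * glcm).sum())
    homogeneity = float((glcm / (1.0 + np.abs(r - c))).sum())
    return contrast, homogeneity

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
contrast, homogeneity = glcm_features(img)
print(contrast, homogeneity)
```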

  6. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Sophie; Pinto, Frank M; Morin-Ducote, Garnetta

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 Radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC = 0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.

  7. Local-search based prediction of medical image registration error

    NASA Astrophysics Data System (ADS)

    Saygili, Görkem

    2018-03-01

    Medical image registration is a crucial task in many different medical imaging applications. Hence, a considerable amount of work has been published recently that aims to predict the error in a registration without any human effort. If provided, these error predictions can be used as feedback to the registration algorithm to further improve its performance. Recent methods generally start by extracting image-based and deformation-based features, then apply feature pooling, and finally train a Random Forest (RF) regressor to predict the real registration error. Image-based features can be calculated after a single registration but provide limited accuracy, whereas deformation-based features, such as the variation of the deformation vector field, may require up to 20 registrations, which is considerably time-consuming. This paper proposes to use features extracted by a local search algorithm as image-based features to estimate the error of a registration. The proposed method comprises a local search algorithm that finds corresponding voxels between registered image pairs and, based on the amount of shift and stereo confidence measures, densely predicts the amount of registration error in millimetres using an RF regressor. Compared to other algorithms in the literature, the proposed algorithm does not require multiple registrations, can be efficiently implemented on a Graphics Processing Unit (GPU), and still provides highly accurate error predictions in the presence of large registration errors. Experimental results with real registrations on a public dataset indicate a substantially high accuracy achieved by using features from the local search algorithm.
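    The core local-search step can be sketched as exhaustive block matching, where the recovered shift serves as an image-based error feature; a toy numpy example with a known synthetic misalignment (patch and search-window sizes are illustrative):

```python
import numpy as np

def local_search_shift(fixed, moving, x, y, patch=3, radius=4):
    """Find the integer shift (dy, dx) that best aligns a patch of `moving`
    at (y, x) to `fixed`, by exhaustive local search over SSD cost.
    The shift magnitude can serve as a registration-error feature."""
    p = patch // 2
    ref = fixed[y - p:y + p + 1, x - p:x + p + 1]
    best, best_cost = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = moving[y + dy - p:y + dy + p + 1, x + dx - p:x + dx + p + 1]
            cost = float(((ref - cand) ** 2).sum())
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

rng = np.random.default_rng(0)
fixed = rng.normal(size=(32, 32))
moving = np.roll(fixed, shift=(2, -1), axis=(0, 1))  # known misalignment
dy, dx = local_search_shift(fixed, moving, x=16, y=16)
print(dy, dx)  # the recovered shift undoes the roll
```

The paper additionally derives stereo-style confidence measures from the matching costs and trains an RF regressor on them; this sketch shows only the matching step.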

  8. WE-E-17A-02: Predictive Modeling of Outcome Following SABR for NSCLC Based On Radiomics of FDG-PET Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, R; Aguilera, T; Shultz, D

    2014-06-15

    Purpose: This study aims to develop predictive models of patient outcome by extracting advanced imaging features (i.e., radiomics) from FDG-PET images. Methods: We acquired pre-treatment PET scans for 51 stage I NSCLC patients treated with SABR. We calculated 139 quantitative features from each patient's PET image, including 5 morphological features, 8 statistical features, 27 texture features, and 100 features from the intensity-volume histogram. Based on the imaging features, we aim to distinguish between 2 risk groups of patients: those with regional failure or distant metastasis versus those without. We investigated 3 pattern classification algorithms: linear discriminant analysis (LDA), naive Bayes (NB), and logistic regression (LR). To avoid the curse of dimensionality, we performed feature selection by first removing redundant features and then applying sequential forward selection using the wrapper approach. To evaluate the predictive performance, we performed 10-fold cross validation with 1000 random splits of the data and calculated the area under the ROC curve (AUC). Results: Feature selection identified 2 texture features (homogeneity and/or wavelet decompositions) for NB and LR, while for LDA, SUVmax and one texture feature (correlation) were identified. All 3 classifiers achieved statistically significant improvements over conventional PET imaging metrics such as tumor volume (AUC = 0.668) and SUVmax (AUC = 0.737). Overall, NB achieved the best predictive performance (AUC = 0.806). This also compares favorably with MTV using the best threshold at an SUV of 11.6 (AUC = 0.746). At a sensitivity of 80%, NB achieved 69% specificity, while SUVmax and tumor volume only had 36% and 47% specificity, respectively. Conclusion: Through a systematic analysis of advanced PET imaging features, we are able to build models with improved predictive value over conventional imaging metrics. If validated in a large independent cohort, the proposed techniques could potentially aid in identifying patients who might benefit from adjuvant therapy.
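    The wrapper-style sequential forward selection paired with naive Bayes can be sketched with scikit-learn's SequentialFeatureSelector on synthetic stand-in data (the 51 x 139 shape mirrors the abstract; the selected-feature count and scoring choice are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for 51 patients x 139 PET radiomics features.
X, y = make_classification(n_samples=51, n_features=139, n_informative=4,
                           random_state=0)

# Wrapper approach: forward selection scored by cross-validated AUC.
nb = GaussianNB()
sfs = SequentialFeatureSelector(nb, n_features_to_select=2,
                                direction="forward", scoring="roc_auc", cv=5)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
auc = cross_val_score(nb, X[:, selected], y, cv=5, scoring="roc_auc").mean()
print(f"selected features {selected}, CV AUC={auc:.2f}")
```

Note that evaluating the selected features on the same folds used for selection, as done here for brevity, is optimistic; the study's 1000 random splits mitigate this.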

  9. Computer-aided global breast MR image feature analysis for prediction of tumor response to chemotherapy: performance assessment

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Tan, Maxine; Hollingsworth, Alan B.; Zheng, Bin; Cheng, Samuel

    2016-03-01

    Dynamic contrast-enhanced breast magnetic resonance imaging (DCE-MRI) has been used increasingly in breast cancer diagnosis and assessment of cancer treatment efficacy. In this study, we applied a computer-aided detection (CAD) scheme to automatically segment breast regions depicted on MR images, and used the kinetic image features computed from the global breast MR images acquired before neoadjuvant chemotherapy to build a new quantitative model to predict the response of breast cancer patients to the chemotherapy. To assess the performance and robustness of this new prediction model, an image dataset involving breast MR images acquired from 151 cancer patients before undergoing neoadjuvant chemotherapy was retrospectively assembled and used. Among them, 63 patients had "complete response" (CR) to chemotherapy, in which the enhanced contrast levels inside the tumor volume (pre-treatment) were reduced to the level of the normal enhanced background parenchymal tissues (post-treatment), while 88 patients had "partial response" (PR), in which high contrast enhancement remained in the tumor regions after treatment. We analyzed the correlation among the 22 global kinetic image features and then selected a set of 4 optimal features. Applying an artificial neural network trained with the fusion of these 4 kinetic image features, the prediction model yielded an area under the ROC curve (AUC) of 0.83 ± 0.04. This study demonstrated that, by avoiding tumor segmentation, which is often difficult and unreliable, fusion of kinetic image features computed from global breast MR images can also generate a useful clinical marker for predicting the efficacy of chemotherapy.

  10. Pseudo CT estimation from MRI using patch-based random forest

    NASA Astrophysics Data System (ADS)

    Yang, Xiaofeng; Lei, Yang; Shu, Hui-Kuo; Rossi, Peter; Mao, Hui; Shim, Hyunsuk; Curran, Walter J.; Liu, Tian

    2017-02-01

    Recently, MR simulators have gained popularity because they avoid the radiation exposure associated with the CT simulators used in radiation therapy planning. We propose a method for pseudo CT estimation from MR images based on a patch-based random forest. Patient-specific anatomical features are extracted from the aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified using feature selection and used to train the random forest. The well-trained random forest is then used to predict the pseudo CT of a new patient. This prediction technique was tested with human brain images, and the prediction accuracy was assessed using the original CT images. Peak signal-to-noise ratio (PSNR) and feature similarity (FSIM) indexes were used to quantify the differences between the pseudo and original CT images. The experimental results showed that the proposed method could accurately generate pseudo CT images from MR images. In summary, we have developed a new pseudo CT prediction method based on a patch-based random forest, demonstrated its clinical feasibility, and validated its prediction accuracy. This pseudo CT prediction technique could be a useful tool for MRI-based radiation treatment planning and attenuation correction in a PET/MRI scanner.
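    A minimal sketch of patch-based random forest regression for pseudo CT estimation, using a synthetic 2-D "MR"/"CT" pair with an assumed nonlinear intensity link; real systems use 3-D patches, aligned multi-patient training data, and held-out evaluation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: paired 2-D "MR" and "CT" slices with a nonlinear link.
mr = rng.normal(size=(40, 40))
ct = np.tanh(2 * mr) + 0.05 * rng.normal(size=mr.shape)

def patches(img, p=1):
    """Flatten each (2p+1)x(2p+1) neighborhood into a per-voxel signature."""
    h, w = img.shape
    out = [img[y - p:y + p + 1, x - p:x + p + 1].ravel()
           for y in range(p, h - p) for x in range(p, w - p)]
    return np.array(out)

X = patches(mr)                     # voxel-wise patch signatures
t = ct[1:-1, 1:-1].ravel()          # target CT value at each patch center
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, t)
pred = rf.predict(X).reshape(38, 38)

# PSNR of the pseudo CT against the "original" CT, as in the paper.
mse = float(((pred - ct[1:-1, 1:-1]) ** 2).mean())
psnr = 10 * np.log10((ct.max() - ct.min()) ** 2 / mse)
print(f"training-set PSNR = {psnr:.1f} dB")
```

Because the forest is evaluated on its own training slice here, the PSNR is optimistic; the sketch only illustrates the patch-signature-to-CT regression idea.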

  11. Assessing the performance of quantitative image features on early stage prediction of treatment effectiveness for ovary cancer patients: a preliminary investigation

    NASA Astrophysics Data System (ADS)

    Zargari, Abolfazl; Du, Yue; Thai, Theresa C.; Gunderson, Camille C.; Moore, Kathleen; Mannel, Robert S.; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2018-02-01

    The objective of this study is to investigate the performance of global and local features in estimating the characteristics of highly heterogeneous metastatic tumours, for accurately predicting treatment effectiveness in advanced-stage ovarian cancer patients. To achieve this, a quantitative image analysis scheme was developed to estimate a total of 103 features from three different groups: shape and density, wavelet, and Gray Level Difference Method (GLDM) features. Shape and density features are global features, which are applied directly to the entire target image; wavelet and GLDM features are local features, which are applied to divided blocks of the target image. To assess performance, the new scheme was applied to a retrospective dataset containing 120 recurrent and high-grade ovarian cancer patients. The results indicate that the three best-performing features are skewness, root-mean-square (rms), and the mean of local GLDM texture, indicating the importance of integrating local features. In addition, the averaged predictive performance is comparable among the three different categories. This investigation concluded that the local features contain at least as much tumour heterogeneity information as the global features, which may be meaningful for improving the predictive performance of quantitative image markers for the diagnosis and prognosis of ovarian cancer patients.

  12. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Voisin, Sophie; Tourassi, Georgia D.; Pinto, Frank

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  13. SU-E-J-260: Quantitative Image Feature Analysis of Multiphase Liver CT for Hepatocellular Carcinoma (HCC) in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, W; Wang, J; Lu, W

    Purpose: To identify the effective quantitative image features (radiomics features) for prediction of response, survival, recurrence, and metastasis of hepatocellular carcinoma (HCC) in radiotherapy. Methods: Multiphase contrast-enhanced liver CT images were acquired in 16 patients with HCC pre and post radiation therapy (RT). In this study, arterial phase CT images were selected to analyze the effectiveness of image features for the prediction of treatment outcome of HCC to RT. Response evaluated by RECIST criteria, survival, local recurrence (LR), distant metastasis (DM), and liver metastasis (LM) were examined. A radiation oncologist manually delineated the tumor and normal liver on pre and post CT scans, respectively. Quantitative image features were extracted to characterize the intensity distribution (n=8), spatial patterns (texture, n=36), and shape (n=16) of the tumor and liver, respectively. Moreover, differences between pre and post image features were calculated (n=120). A total of 360 features were extracted and then analyzed by an unpaired Student’s t-test to rank the effectiveness of features for the prediction of response. Results: The five most effective features were selected for prediction of each outcome. Significant predictors for tumor response and survival are changes in tumor shape (Second Major Axis Length, p = 0.002; Eccentricity, p = 0.0002); for LR, liver texture (Standard Deviation (SD) of High Grey Level Run Emphasis and SD of Entropy, both p = 0.005) on pre and post CT images; for DM, tumor texture (SD of Entropy, p = 0.01) on the pre CT image; and for LM, liver (Mean of Cluster Shade, p = 0.004) and tumor texture (SD of Entropy, p = 0.006) on the pre CT image. Intensity distribution features were not significant (p > 0.09). Conclusion: Quantitative CT image features were found to be potential predictors of the five endpoints of HCC in RT. This work was supported in part by the National Cancer Institute Grant R01CA172638.
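    The feature-ranking step (an unpaired t-test between outcome groups, keeping the five most significant features) can be sketched with scipy on synthetic stand-in data; the group sizes and feature count below are illustrative:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Synthetic stand-in: 16 patients x 20 image features, binary outcome.
X = rng.normal(size=(16, 20))
y = np.array([0] * 8 + [1] * 8)
X[y == 1, :3] += 2.0   # make the first 3 features truly discriminative

# Rank features by unpaired t-test p-value between outcome groups,
# then keep the five most significant, as in the abstract.
_, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
top5 = np.argsort(pvals)[:5]
print("five most effective features:", top5, pvals[top5])
```

With 360 candidate features and 16 patients, the study's ranking would ordinarily also warrant multiple-comparison control, which this sketch omits.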

  14. Feature Selection Methods for Zero-Shot Learning of Neural Activity.

    PubMed

    Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R

    2017-01-01

    Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
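    The correlation-based stability baseline discussed here can be sketched as follows: score each feature by the correlation of its responses across repeated presentations and keep the most stable ones. The data below are synthetic stand-ins (feature counts, noise levels, and which half is unstable are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: responses of 50 neural features (e.g., voxels) to 30
# stimuli, measured in two repeated sessions.
n_features, n_stimuli = 50, 30
signal = rng.normal(size=(n_features, n_stimuli))
rep1 = signal + 0.3 * rng.normal(size=signal.shape)
rep2 = signal + 0.3 * rng.normal(size=signal.shape)
rep2[25:] = rng.normal(size=(25, n_stimuli))  # make half the features unstable

def stability(a, b):
    """Per-feature Pearson correlation across the two repetitions."""
    a = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    b = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return (a * b).mean(axis=1)

scores = stability(rep1, rep2)
stable = np.argsort(scores)[::-1][:10]  # keep the 10 most stable features
print("most stable features:", stable)
```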

  15. TU-D-207B-01: A Prediction Model for Distinguishing Radiation Necrosis From Tumor Progression After Gamma Knife Radiosurgery Based On Radiomics Features From MR Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Z; MD Anderson Cancer Center, Houston, TX; Ho, A

    Purpose: To develop and validate a prediction model using radiomics features extracted from MR images to distinguish radiation necrosis from tumor progression for brain metastases treated with Gamma Knife radiosurgery. Methods: The images used to develop the model were T1 post-contrast MR scans from 71 patients who had pathologic confirmation of necrosis or progression; 1 lesion was identified per patient (17 necrosis and 54 progression). Radiomics features were extracted from 2 images at 2 time points per patient, both obtained prior to resection. Each lesion was manually contoured on each image, and 282 radiomics features were calculated for each lesion. The correlation of each radiomics feature between the two time points was calculated within each group to identify a subset of features with distinct values between the two groups. The delta of this subset of radiomics features, characterizing changes from the earlier time point to the later one, was included as a covariate to build a prediction model using support vector machines with a cubic polynomial kernel function. The model was evaluated with 10-fold cross-validation. Results: Forty radiomics features were selected based on consistent correlation values of approximately 0 for the necrosis group and >0.2 for the progression group. In performing the 10-fold cross-validation, we narrowed this number down to 11 delta radiomics features for the model. This 11-delta-feature model showed an overall prediction accuracy of 83.1%, with a true positive rate of 58.8% in predicting necrosis and 90.7% for predicting tumor progression. The area under the curve for the prediction model was 0.79. Conclusion: These delta radiomics features extracted from MR scans showed potential for distinguishing radiation necrosis from tumor progression. This tool may be a useful, noninvasive means of determining the status of an enlarging lesion after radiosurgery, aiding decision-making regarding surgical resection versus conservative medical management.
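    A hedged sketch of the final classifier (an SVM with a cubic polynomial kernel under 10-fold cross-validation) on synthetic stand-in "delta" features; the 71-lesion size and the roughly 24% necrosis prevalence mirror the abstract, but the features themselves are simulated:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 71 lesions x 11 "delta" radiomics features (change
# between the two pre-resection time points), necrosis vs progression label.
X, y = make_classification(n_samples=71, n_features=11, n_informative=4,
                           weights=[0.24], random_state=0)

# SVM with a cubic polynomial kernel, evaluated by 10-fold CV as described.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
pred = cross_val_predict(clf, X, y, cv=10)
acc = accuracy_score(y, pred)
print(f"10-fold CV accuracy: {acc:.2f}")
```

With such class imbalance, per-class true positive rates (as reported in the abstract) are more informative than overall accuracy alone.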

  16. Prediction of cervical cancer recurrence using textural features extracted from 18F-FDG PET images acquired with different scanners.

    PubMed

    Reuzé, Sylvain; Orlhac, Fanny; Chargari, Cyrus; Nioche, Christophe; Limkin, Elaine; Riet, François; Escande, Alexandre; Haie-Meder, Christine; Dercle, Laurent; Gouy, Sébastien; Buvat, Irène; Deutsch, Eric; Robert, Charlotte

    2017-06-27

    To identify an imaging signature predicting local recurrence for locally advanced cervical cancer (LACC) treated by chemoradiation and brachytherapy from baseline 18F-FDG PET images, and to evaluate the possibility of gathering images from two different PET scanners in a radiomic study. 118 patients were included retrospectively. Two groups (G1, G2) were defined according to the PET scanner used for image acquisition. Eleven radiomic features were extracted from delineated cervical tumors to evaluate: (i) the predictive value of features for local recurrence of LACC, (ii) their reproducibility as a function of the scanner within a hepatic reference volume, (iii) the impact of voxel size on feature values. Eight features were statistically significant predictors of local recurrence in G1 (p < 0.05). The multivariate signature trained in G2 was validated in G1 (AUC=0.76, p<0.001) and identified local recurrence more accurately than SUVmax (p=0.022). Four features were significantly different between G1 and G2 in the liver. Spatial resampling was not sufficient to explain the stratification effect. This study showed that radiomic features could predict local recurrence of LACC better than SUVmax. Further investigation is needed before applying a model designed using data from one PET scanner to another.

  17. Image analysis and machine learning in digital pathology: Challenges and opportunities.

    PubMed

    Madabhushi, Anant; Lee, George

    2016-10-01

    With the rise in whole slide scanner technology, large numbers of tissue slides are being scanned and represented and archived digitally. While digital pathology has substantial implications for telepathology, second opinions, and education there are also huge research opportunities in image computing with this new source of "big data". It is well known that there is fundamental prognostic data embedded in pathology images. The ability to mine "sub-visual" image features from digital pathology slide images, features that may not be visually discernible by a pathologist, offers the opportunity for better quantitative modeling of disease appearance and hence possibly improved prediction of disease aggressiveness and patient outcome. However the compelling opportunities in precision medicine offered by big digital pathology data come with their own set of computational challenges. Image analysis and computer assisted detection and diagnosis tools previously developed in the context of radiographic images are woefully inadequate to deal with the data density in high resolution digitized whole slide images. Additionally there has been recent substantial interest in combining and fusing radiologic imaging and proteomics and genomics based measurements with features extracted from digital pathology images for better prognostic prediction of disease aggressiveness and patient outcome. Again there is a paucity of powerful tools for combining disease specific features that manifest across multiple different length scales. The purpose of this review is to discuss developments in computational image analysis tools for predictive modeling of digital pathology images from a detection, segmentation, feature extraction, and tissue classification perspective. 
We discuss the emergence of new handcrafted feature approaches for improved predictive modeling of tissue appearance and review deep learning schemes for both object detection and tissue classification. We also briefly review the state of the art in fusing radiology and pathology images, and in combining digital pathology derived image measurements with molecular "omics" features for better predictive modeling. The review ends with a brief discussion of some of the technical and computational challenges to be overcome and reflects on future opportunities for the quantitation of histopathology. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Evaluation of image features and classification methods for Barrett's cancer detection using VLE imaging

    NASA Astrophysics Data System (ADS)

    Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.

    2017-03-01

Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high-resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time-consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, we benchmark widely applied machine learning techniques and feature extraction methods. Second, three new features based on the clinical detection model are proposed, with superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine with our modified Haralick features and optimal image cropping, achieving an area under the receiver operating characteristic curve of 0.95, compared to 0.81 for the clinical prediction model. Optimal execution time is achieved using a proposed mean and median feature, which is extracted at least a factor of 2.5 faster than alternative features with comparable performance.
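The "mean and median feature" idea above (summarize a cropped region by two intensity statistics and feed them to an SVM) can be sketched as follows. The toy images, the brightness model for dysplastic tissue, and the crop bounds are all assumptions for illustration; the paper's exact feature and cropping definitions are not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Toy stand-ins for 30 dysplastic and 30 non-dysplastic grayscale VLE frames;
# dysplastic tissue is modeled as slightly brighter on average (an assumption).
def make_images(n, brightness):
    return [np.clip(rng.normal(brightness, 20, size=(64, 128)), 0, 255)
            for _ in range(n)]

images = make_images(30, 120) + make_images(30, 100)
labels = np.array([1] * 30 + [0] * 30)

# "Mean and median" feature: crop to a central band, then summarize intensity.
def mean_median_feature(img, top=8, bottom=56):
    roi = img[top:bottom, :]                 # crude stand-in for optimal cropping
    return [roi.mean(), np.median(roi)]

X = np.array([mean_median_feature(img) for img in images])
acc = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, labels, cv=5).mean()
print(f"5-fold CV accuracy = {acc:.2f}")
```

Because the feature is just two summary statistics per frame, extraction cost is dominated by a single pass over the pixels, which is the source of the speed advantage the abstract reports.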

  19. No-reference image quality assessment based on statistics of convolution feature maps

    NASA Astrophysics Data System (ADS)

    Lv, Xiaoxin; Qin, Min; Chen, Xiaohui; Wei, Guo

    2018-04-01

We propose a Convolutional Feature Maps (CFM) driven approach to accurately predict image quality. Our motivation is based on the finding that Natural Scene Statistics (NSS) features computed on convolutional feature maps are significantly sensitive to the distortion degree of an image. In our method, a Convolutional Neural Network (CNN) is trained to obtain kernels for generating the CFM. We design a forward NSS layer which operates on the CFM to better extract NSS features. The quality-aware features derived from the output of the NSS layer effectively describe the type and degree of distortion an image has suffered. Finally, Support Vector Regression (SVR) is employed in our No-Reference Image Quality Assessment (NR-IQA) model to predict the subjective quality score of a distorted image. Experiments conducted on two public databases demonstrate that the performance of the proposed method is competitive with state-of-the-art NR-IQA methods.

  20. Comprehensive Computational Pathological Image Analysis Predicts Lung Cancer Prognosis.

    PubMed

    Luo, Xin; Zang, Xiao; Yang, Lin; Huang, Junzhou; Liang, Faming; Rodriguez-Canales, Jaime; Wistuba, Ignacio I; Gazdar, Adi; Xie, Yang; Xiao, Guanghua

    2017-03-01

    Pathological examination of histopathological slides is a routine clinical procedure for lung cancer diagnosis and prognosis. Although the classification of lung cancer has been updated to become more specific, only a small subset of the total morphological features are taken into consideration. The vast majority of the detailed morphological features of tumor tissues, particularly tumor cells' surrounding microenvironment, are not fully analyzed. The heterogeneity of tumor cells and close interactions between tumor cells and their microenvironments are closely related to tumor development and progression. The goal of this study is to develop morphological feature-based prediction models for the prognosis of patients with lung cancer. We developed objective and quantitative computational approaches to analyze the morphological features of pathological images for patients with NSCLC. Tissue pathological images were analyzed for 523 patients with adenocarcinoma (ADC) and 511 patients with squamous cell carcinoma (SCC) from The Cancer Genome Atlas lung cancer cohorts. The features extracted from the pathological images were used to develop statistical models that predict patients' survival outcomes in ADC and SCC, respectively. We extracted 943 morphological features from pathological images of hematoxylin and eosin-stained tissue and identified morphological features that are significantly associated with prognosis in ADC and SCC, respectively. Statistical models based on these extracted features stratified NSCLC patients into high-risk and low-risk groups. The models were developed from training sets and validated in independent testing sets: a predicted high-risk group versus a predicted low-risk group (for patients with ADC: hazard ratio = 2.34, 95% confidence interval: 1.12-4.91, p = 0.024; for patients with SCC: hazard ratio = 2.22, 95% confidence interval: 1.15-4.27, p = 0.017) after adjustment for age, sex, smoking status, and pathologic tumor stage. 
The results suggest that the quantitative morphological features of tumor pathological images predict prognosis in patients with lung cancer. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  1. Feature Selection Methods for Zero-Shot Learning of Neural Activity

    PubMed Central

    Caceres, Carlos A.; Roos, Matthew J.; Rupp, Kyle M.; Milsap, Griffin; Crone, Nathan E.; Wolmetz, Michael E.; Ratto, Christopher R.

    2017-01-01

Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy. PMID:28690513
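The correlation-based stability baseline mentioned above scores each feature by how reproducible its response profile is across repeated presentations of the same stimuli. A minimal numpy sketch on synthetic data (the split-half design and noise levels are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy neural data: 40 stimuli x 2 repetitions x 200 features (e.g., voxels).
n_stim, n_feat = 40, 200
signal = rng.normal(size=(n_stim, n_feat))
noise = 3.0 * np.ones(n_feat)
noise[:20] = 0.5                             # the first 20 features are "stable"
rep1 = signal + rng.normal(size=(n_stim, n_feat)) * noise
rep2 = signal + rng.normal(size=(n_stim, n_feat)) * noise

# Stability score: Pearson correlation of each feature's response profile
# across the two repetitions.
def stability(a, b):
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

scores = stability(rep1, rep2)
selected = np.argsort(scores)[::-1][:20]     # keep the 20 most stable features
print(f"stable features recovered: {np.sum(selected < 20)}/20")
```

Features whose responses are dominated by stimulus-driven signal correlate strongly across repetitions and float to the top; noise-dominated features do not, regardless of their amplitude.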

  2. Real-time evaluation of polyphenol oxidase (PPO) activity in lychee pericarp based on weighted combination of spectral data and image features as determined by fuzzy neural network.

    PubMed

    Yang, Yi-Chao; Sun, Da-Wen; Wang, Nan-Nan; Xie, Anguo

    2015-07-01

A novel method of using the hyperspectral imaging technique with a weighted combination of spectral data and image features by fuzzy neural network (FNN) was proposed for real-time prediction of polyphenol oxidase (PPO) activity in lychee pericarp. Lychee images were obtained by a hyperspectral reflectance imaging system operating in the range of 400-1000 nm. A support vector machine-recursive feature elimination (SVM-RFE) algorithm was applied to eliminate variables carrying no or little information for the prediction from all bands, resulting in a reduced set of optimal wavelengths. Spectral information at the optimal wavelengths and image color features were then used respectively to develop calibration models for the prediction of PPO in pericarp during storage, and the results of the two models were compared. In order to improve the prediction accuracy, a decision strategy was developed based on a weighted combination of spectral data and image features, in which the weights were determined by FNN for a better estimation of PPO activity. The results showed that the combined decision model was the best among all of the calibration models, with high R² values of 0.9117 and 0.9072 and low RMSEs of 0.45% and 0.459% for calibration and prediction, respectively. These results demonstrate that the proposed weighted combined decision method has great potential for improving model performance. The proposed technique could be used for a better prediction of other internal and external quality attributes of fruits. Copyright © 2015 Elsevier B.V. All rights reserved.
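The SVM-RFE step above (recursively drop the spectral bands with the smallest model coefficients) can be sketched with scikit-learn's `RFE` wrapper around a linear SVM regressor. The data are synthetic; the band count, number of informative bands, and elimination step size are assumptions:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Toy hyperspectral data: 80 samples x 50 "wavelengths"; only 5 carry signal.
n, p, informative = 80, 50, 5
X = rng.normal(size=(n, p))
ppo = (X[:, :informative] @ rng.uniform(1, 2, size=informative)
       + 0.1 * rng.normal(size=n))

# SVM-RFE: recursively drop the wavelengths with the smallest |coefficient|.
selector = RFE(SVR(kernel="linear"), n_features_to_select=informative, step=5)
selector.fit(X, ppo)
chosen = np.flatnonzero(selector.support_)
print("selected wavelength indices:", chosen)
```

Working from a reduced set of optimal wavelengths is what makes real-time prediction feasible: only a handful of bands need to be acquired and processed per sample.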

  3. Evaluation of a web based informatics system with data mining tools for predicting outcomes with quantitative imaging features in stroke rehabilitation clinical trials

    NASA Astrophysics Data System (ADS)

    Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent

    2017-03-01

Quantitative imaging biomarkers are widely used in clinical trials for tracking and evaluation of medical interventions. Previously, we have presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features such as stroke lesion characteristics from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location, and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.

  4. Imaging genetics approach to predict progression of Parkinson's diseases.

    PubMed

    Mansu Kim; Seong-Jin Son; Hyunjin Park

    2017-07-01

Imaging genetics is a tool to extract genetic variants associated with both clinical phenotypes and imaging information. The approach can extract additional genetic variants compared to conventional approaches to better investigate various diseased conditions. Here, we applied imaging genetics to study Parkinson's disease (PD). We aimed to extract significant features derived from imaging genetics and neuroimaging. We built a regression model based on extracted significant features combining genetics and neuroimaging to better predict clinical scores of PD progression (i.e., MDS-UPDRS). Our model yielded a high correlation (r = 0.697, p < 0.001) and a low root mean squared error (8.36) between predicted and actual MDS-UPDRS scores. Neuroimaging (from 123I-Ioflupane SPECT) predictors of the regression model were computed using an independent component analysis approach. Genetic features were computed using an imaging genetics approach based on identified neuroimaging features as intermediate phenotypes. Joint modeling of neuroimaging and genetics could provide complementary information and thus has the potential to provide further insight into the pathophysiology of PD. Our model included newly found neuroimaging features and genetic variants which need further investigation.
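The joint-modality regression above, evaluated by Pearson r and RMSE between predicted and actual clinical scores, can be sketched as follows. The cohort size, feature counts, and the plain linear model are illustrative assumptions standing in for the authors' actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Toy cohort: 100 patients, 5 neuroimaging + 5 genetic predictors of a
# hypothetical clinical progression score (stand-in for MDS-UPDRS).
n = 100
imaging = rng.normal(size=(n, 5))
genetics = rng.normal(size=(n, 5))
score = (imaging @ rng.normal(size=5) + genetics @ rng.normal(size=5)
         + rng.normal(size=n))

X = np.hstack([imaging, genetics])           # joint model of both modalities
pred = LinearRegression().fit(X, score).predict(X)

r = np.corrcoef(pred, score)[0, 1]
rmse = np.sqrt(np.mean((pred - score) ** 2))
print(f"r = {r:.3f}, RMSE = {rmse:.2f}")
```

Stacking the two feature blocks into one design matrix is the simplest form of the "complementary information" argument: neither block alone explains all of the variance the joint model captures.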

  5. MRI signal and texture features for the prediction of MCI to Alzheimer's disease progression

    NASA Astrophysics Data System (ADS)

    Martínez-Torteya, Antonio; Rodríguez-Rojas, Juan; Celaya-Padilla, José M.; Galván-Tejada, Jorge I.; Treviño, Victor; Tamez-Peña, José G.

    2014-03-01

An early diagnosis of Alzheimer's disease (AD) confers many benefits. Several biomarkers from different information modalities have been proposed for the prediction of MCI to AD progression, where features extracted from MRI have played an important role. However, studies have focused almost exclusively on the morphological characteristics of the images. This study aims to determine whether features relating to the signal and texture of the image could add predictive power. Baseline clinical, biological and PET information, and MP-RAGE images for 62 subjects from the Alzheimer's Disease Neuroimaging Initiative were used in this study. Images were divided into 83 regions and 50 features were extracted from each one of these. A multimodal database was constructed, and a feature selection algorithm was used to obtain an accurate and small logistic regression model, which achieved a cross-validation accuracy of 0.96. This model included six features, five of them obtained from the MP-RAGE image, and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index, showing that the two groups are statistically different (p-value of 2.04e-11). The results demonstrate that MRI features related to both signal and texture add predictive power for MCI to AD progression, and support the idea that multimodal biomarkers outperform single-modality biomarkers.

  6. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    PubMed

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) basic image statistics, 2) gray-level co-occurrence matrix (GLCM), 3) gray-level run-length matrix (GLRLM), and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
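The paired-samples t-test ranking step above can be sketched with scipy: compute each feature on paired image conditions, run `ttest_rel` per feature, and rank by |t|. The toy data and the "noisy vs. clean" pairing below are assumptions for illustration:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)

# Toy setup: 30 images x 10 texture features, measured on clean and noisy
# versions of the same images; discriminative features shift, the rest do not.
n_img, n_feat = 30, 10
clean = rng.normal(size=(n_img, n_feat))
noisy = clean + rng.normal(size=(n_img, n_feat)) * 0.2
noisy[:, :3] += 1.5                          # the first 3 features respond to noise

# Paired t-test per feature; rank by |t| as the discrimination score.
t_stats = np.array([ttest_rel(noisy[:, j], clean[:, j]).statistic
                    for j in range(n_feat)])
ranking = np.argsort(-np.abs(t_stats))
print("most discriminative features:", ranking[:3])
```

A sequential forward selection pass would then walk this ranking, adding features one at a time while a validation score keeps improving.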

  7. TU-CD-BRB-01: Normal Lung CT Texture Features Improve Predictive Models for Radiation Pneumonitis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krafft, S; The University of Texas Graduate School of Biomedical Sciences, Houston, TX; Briere, T

    2015-06-15

Purpose: Existing normal tissue complication probability (NTCP) models for radiation pneumonitis (RP) traditionally rely on dosimetric and clinical data but are limited in terms of performance and generalizability. Extraction of pre-treatment image features provides a potential new category of data that can improve NTCP models for RP. We consider quantitative measures of total lung CT intensity and texture in a framework for prediction of RP. Methods: Available clinical and dosimetric data was collected for 198 NSCLC patients treated with definitive radiotherapy. Intensity- and texture-based image features were extracted from the T50 phase of the 4D-CT acquired for treatment planning. A total of 3888 features (15 clinical, 175 dosimetric, and 3698 image features) were gathered and considered candidate predictors for modeling of RP grade≥3. A baseline logistic regression model with mean lung dose (MLD) was first considered. Additionally, a least absolute shrinkage and selection operator (LASSO) logistic regression was applied to the set of clinical and dosimetric features, and subsequently to the full set of clinical, dosimetric, and image features. Model performance was assessed by comparing area under the curve (AUC). Results: A simple logistic fit of MLD was an inadequate model of the data (AUC∼0.5). Including clinical and dosimetric parameters within the framework of the LASSO resulted in improved performance (AUC=0.648). Analysis of the full cohort of clinical, dosimetric, and image features provided further and significant improvement in model performance (AUC=0.727). Conclusions: To achieve significant gains in predictive modeling of RP, new categories of data should be considered in addition to clinical and dosimetric features. We have successfully incorporated CT image features into a framework for modeling RP and have demonstrated improved predictive performance. 
Validation and further investigation of CT image features in the context of RP NTCP modeling is warranted. This work was supported by the Rosalie B. Hite Fellowship in Cancer Research awarded to SPK.
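The modeling comparison above (a single-dose-metric baseline versus an L1-penalized logistic regression over a large candidate feature set) can be sketched with scikit-learn. The data, feature counts, and penalty strength are synthetic assumptions, not the study's actual cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

# Toy cohort: one weak dosimetric predictor ("MLD") plus 50 image features,
# 3 of which carry extra signal; most are irrelevant.
n = 198
mld = rng.normal(size=n)
image_feats = rng.normal(size=(n, 50))
logit = 0.3 * mld + image_feats[:, :3].sum(axis=1)
rp = (logit + rng.normal(size=n) > 0).astype(int)

auc_mld = roc_auc_score(rp, mld)             # baseline: MLD alone

# L1 (LASSO-style) logistic regression over the full candidate set; the
# penalty drives most irrelevant coefficients to exactly zero.
X = np.column_stack([mld, image_feats])
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, rp)
auc_full = roc_auc_score(rp, lasso.decision_function(X))
print(f"AUC MLD-only = {auc_mld:.2f}, AUC with image features = {auc_full:.2f}")
```

The sparsity of the L1 penalty is what makes a 3888-feature candidate pool tractable: the fitted model keeps only the handful of predictors that earn their coefficients.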

  8. Applying a radiomics approach to predict prognosis of lung cancer patients

    NASA Astrophysics Data System (ADS)

    Emaminejad, Nastaran; Yan, Shiju; Wang, Yunzhi; Qian, Wei; Guan, Yubao; Zheng, Bin

    2016-03-01

Radiomics is an emerging technology to decode the tumor phenotype based on quantitative analysis of image features computed from radiographic images. In this study, we applied the radiomics concept to investigate the association among the CT image features of lung tumors, which are either quantitatively computed or subjectively rated by radiologists, and two genomic biomarkers, namely protein expression of the excision repair cross-complementing 1 (ERCC1) gene and of a regulatory subunit of ribonucleotide reductase (RRM1), in predicting disease-free survival (DFS) of lung cancer patients after surgery. An image dataset involving 94 patients was used. Among them, 20 had cancer recurrence within 3 years, while 74 patients remained disease-free. After tumor segmentation, 35 image features were computed from CT images. Using the Weka data mining software package, we selected 10 non-redundant image features. Applying a SMOTE algorithm to generate synthetic data to balance case numbers in the two DFS ("yes" and "no") groups and a leave-one-case-out training/testing method, we optimized and compared a number of machine learning classifiers using (1) quantitative image (QI) features, (2) subjectively rated (SR) features, and (3) genomic biomarkers (GB). Data analyses showed relatively low correlation among the QI, SR and GB prediction results (with Pearson correlation coefficients < 0.5, including between the ERCC1 and RRM1 biomarkers). Using area under the ROC curve as an assessment index, the QI, SR and GB based classifiers yielded AUC = 0.89+/-0.04, 0.73+/-0.06 and 0.76+/-0.07, respectively, which showed that all three types of features had prediction power (AUC>0.5). Among them, QI features yielded the highest performance.
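The SMOTE step above oversamples the minority class by interpolating between a minority sample and one of its nearest minority neighbors. A minimal numpy sketch of that idea, mirroring the 20-recurrence / 74-disease-free imbalance (the feature values are synthetic; production code would normally use a library implementation such as imbalanced-learn):

```python
import numpy as np

rng = np.random.default_rng(7)

# Imbalanced toy data mirroring the 20 recurrence / 74 disease-free split.
minority = rng.normal(loc=1.0, size=(20, 10))
majority = rng.normal(loc=0.0, size=(74, 10))

def smote(X, n_new, k=5, rng=rng):
    """Minimal SMOTE sketch: move a random sample a random fraction of the
    way toward one of its k nearest neighbors within the same class."""
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the sample itself
        j = rng.choice(nn)
        out.append(X[i] + rng.random() * (X[j] - X[i]))
    return np.array(out)

synthetic = smote(minority, n_new=len(majority) - len(minority))
balanced = np.vstack([minority, synthetic])
print(balanced.shape, majority.shape)        # both classes now hold 74 samples
```

Interpolating within the minority class, rather than duplicating samples, gives classifiers a smoother decision region around the rare class without inventing points outside its local neighborhood.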

  9. Determining degree of optic nerve edema from color fundus photography

    NASA Astrophysics Data System (ADS)

    Agne, Jason; Wang, Jui-Kai; Kardon, Randy H.; Garvin, Mona K.

    2015-03-01

Swelling of the optic nerve head (ONH) is subjectively assessed by clinicians using the Frisén scale. It is believed that a direct measurement of the ONH volume would serve as a better representation of the swelling. However, a direct measurement requires optic nerve imaging with spectral domain optical coherence tomography (SD-OCT) and 3D segmentation of the resulting images, which is not always available during clinical evaluation. Furthermore, telemedical imaging of the eye at remote locations is more feasible with non-mydriatic fundus cameras which are less costly than OCT imagers. Therefore, there is a critical need to develop a more quantitative analysis of optic nerve swelling on a continuous scale, similar to SD-OCT. Here, we select features from more commonly available 2D fundus images and use them to predict ONH volume. Twenty-six features were extracted from each of 48 color fundus images. The features include attributes of the blood vessels, optic nerve head, and peripapillary retina areas. These features were used in a regression analysis to predict ONH volume, as computed by a segmentation of the SD-OCT image. The results of the regression analysis yielded a mean square error of 2.43 mm³ and a correlation coefficient between computed and predicted volumes of R = 0.771, which suggests that ONH volume may be predicted from fundus features alone.

  10. MO-AB-BRA-10: Cancer Therapy Outcome Prediction Based On Dempster-Shafer Theory and PET Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lian, C; University of Rouen, QuantIF - EA 4108 LITIS, 76000 Rouen; Li, H

    2015-06-15

Purpose: In cancer therapy, utilizing 18F-FDG PET image-based features for accurate outcome prediction is challenging because of 1) limited discriminative information within a small number of PET image sets, and 2) fluctuant feature characteristics caused by the inferior spatial resolution and system noise of PET imaging. In this study, we proposed a new Dempster-Shafer theory (DST) based approach, evidential low-dimensional transformation with feature selection (ELT-FS), to accurately predict cancer therapy outcome with both PET imaging features and clinical characteristics. Methods: First, a specific loss function with a sparse penalty was developed to learn an adaptive low-rank distance metric for representing the dissimilarity between different patients' feature vectors. By minimizing this loss function, a linear low-dimensional transformation of the input features was achieved. Also, imprecise features were excluded simultaneously by applying an l2,1-norm regularization of the learnt dissimilarity metric in the loss function. Finally, the learnt dissimilarity metric was applied in an evidential K-nearest-neighbor (EK-NN) classifier to predict treatment outcome. Results: Twenty-five patients with stage II–III non-small-cell lung cancer and thirty-six patients with esophageal squamous cell carcinomas treated with chemo-radiotherapy were collected. For the two groups of patients, 52 and 29 features, respectively, were utilized. The leave-one-out cross-validation (LOOCV) protocol was used for evaluation. Compared to three existing linear transformation methods (PCA, LDA, NCA), the proposed ELT-FS leads to higher prediction accuracy for the training and testing sets both for lung-cancer patients (100+/-0.0, 88.0+/-33.17) and for esophageal-cancer patients (97.46+/-1.64, 83.33+/-37.8). The ELT-FS also provides superior class separation in both test data sets. 
Conclusion: A novel DST-based approach has been proposed to predict cancer treatment outcome using PET image features and clinical characteristics. A specific loss function has been designed for robust accommodation of feature set incertitude and imprecision, facilitating adaptive learning of the dissimilarity metric for the EK-NN classifier.

  11. Predictive capabilities of statistical learning methods for lung nodule malignancy classification using diagnostic image features: an investigation using the Lung Image Database Consortium dataset

    NASA Astrophysics Data System (ADS)

    Hancock, Matthew C.; Magnan, Jerry F.

    2017-03-01

To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capabilities of statistical learning methods for classifying nodule malignancy, utilizing the Lung Image Database Consortium (LIDC) dataset, and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (+/-1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (+/-0.012), which increases to 0.949 (+/-0.007) when diameter and volume features are included, while the accuracy increases to 88.08 (+/-1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
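The "theoretical upper bound" idea above follows from the discreteness of radiologist-assigned ratings: different nodules can share the exact same feature vector while carrying conflicting labels, so no classifier restricted to those features can beat the per-vector majority vote. A small sketch of that bound on synthetic ratings (the rating scales and label rule are assumptions):

```python
import numpy as np
from collections import Counter, defaultdict

rng = np.random.default_rng(8)

# Toy "radiologist ratings": 500 nodules, four 1-5 ordinal features, with a
# noisy label rule so identical feature vectors can disagree on the label.
features = rng.integers(1, 6, size=(500, 4))
labels = (features.sum(axis=1) + rng.integers(0, 6, size=500) > 14).astype(int)

# Group samples by their exact (discrete) feature vector.
groups = defaultdict(list)
for f, y in zip(map(tuple, features), labels):
    groups[f].append(y)

# Upper bound: an ideal classifier answers the majority label of each group.
best_correct = sum(max(Counter(ys).values()) for ys in groups.values())
print(f"theoretical max accuracy = {best_correct / len(labels):.3f}")
```

The gap between a fitted model's accuracy and this bound isolates how much headroom is left in the model, as opposed to irreducible label ambiguity in the feature representation itself.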

  12. Deformable image registration as a tool to improve survival prediction after neoadjuvant chemotherapy for breast cancer: results from the ACRIN 6657/I-SPY-1 trial

    NASA Astrophysics Data System (ADS)

    Jahani, Nariman; Cohen, Eric; Hsieh, Meng-Kang; Weinstein, Susan P.; Pantalone, Lauren; Davatzikos, Christos; Kontos, Despina

    2018-02-01

We examined the ability of DCE-MRI longitudinal features to give early prediction of recurrence-free survival (RFS) in women undergoing neoadjuvant chemotherapy for breast cancer, in a retrospective analysis of 106 women from the I-SPY 1 cohort. These features were based on the voxel-wise changes seen in registered images taken before treatment and after the first round of chemotherapy. We computed the transformation field using a robust deformable image registration technique to match breast images from these two visits. Using the deformation field, parametric response maps (PRMs) — a voxel-based feature analysis of longitudinal changes in images between visits — were computed for maps of four kinetic features (signal enhancement ratio, peak enhancement, and wash-in/wash-out slopes). A two-level discrete wavelet transform was applied to these PRMs to extract heterogeneity information about tumor change between visits. To estimate survival, a Cox proportional hazards model was applied, with the C statistic as the measure of success in predicting RFS. The best PRM feature (as determined by the C statistic in univariable analysis) was identified for each of the four kinetic features. The baseline model, incorporating functional tumor volume, age, race, and hormone response status, had a C statistic of 0.70 in predicting RFS. The model augmented with the four PRM features had a C statistic of 0.76. Thus, our results suggest that adding information on the texture of voxel-level changes in tumor kinetic response between registered images of the first and second visits could improve early RFS prediction in breast cancer after neoadjuvant chemotherapy.
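The C statistic used above measures, over all comparable patient pairs, how often the patient with the higher risk score experiences the event first. A self-contained numpy sketch of Harrell's concordance index on synthetic survival data (the risk/time model is an assumption for illustration):

```python
import numpy as np

def concordance_index(times, scores, events):
    """C statistic: among comparable pairs, the fraction where the higher
    risk score belongs to the patient with the earlier observed event."""
    num = den = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable if i's event precedes j's observed time
            if events[i] and times[i] < times[j]:
                den += 1
                if scores[i] > scores[j]:
                    num += 1
                elif scores[i] == scores[j]:
                    num += 0.5
    return num / den

rng = np.random.default_rng(9)
risk = rng.normal(size=106)                          # toy risk scores, 106 patients
times = np.exp(-risk + 0.5 * rng.normal(size=106))   # higher risk -> earlier event
events = rng.random(106) < 0.7                       # ~70% observed events
c = concordance_index(times, risk, events)
print(f"C statistic = {c:.2f}")
```

A C statistic of 0.5 means the scores rank pairs no better than chance, so the reported improvement from 0.70 to 0.76 reflects genuinely better pairwise ordering of recurrence times.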

  13. Radiomic analysis in prediction of Human Papilloma Virus status.

    PubMed

    Yu, Kaixian; Zhang, Youyi; Yu, Yang; Huang, Chao; Liu, Rongjie; Li, Tengfei; Yang, Liuqing; Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Zhu, Hongtu

    2017-12-01

Human Papilloma Virus (HPV) has been associated with oropharyngeal cancer prognosis. Traditionally, HPV status is tested through invasive lab tests. Recently, the rapid development of statistical image analysis techniques has enabled precise quantitative analysis of medical images. The quantitative analysis of Computed Tomography (CT) provides a non-invasive way to assess HPV status for oropharynx cancer patients. We designed a statistical radiomics approach analyzing CT images to predict HPV status. Various radiomics features were extracted from CT scans and analyzed using statistical feature selection and prediction methods. Our approach ranked highest in the 2016 Medical Image Computing and Computer Assisted Intervention (MICCAI) grand challenge: Oropharynx Cancer (OPC) Radiomics Challenge, Human Papilloma Virus (HPV) Status Prediction. Further analysis of the most relevant radiomic features distinguishing HPV-positive and HPV-negative subjects suggested that HPV-positive patients usually have smaller and simpler tumors.

  14. TU-AB-BRA-10: Prognostic Value of Intra-Radiation Treatment FDG-PET and CT Imaging Features in Locally Advanced Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, J; Pollom, E; Durkee, B

    2015-06-15

Purpose: To predict response to radiation treatment using computational FDG-PET and CT image features in locally advanced head and neck cancer (HNC). Methods: 68 patients with Stage III-IVB HNC treated with chemoradiation were included in this retrospective study. For each patient, we analyzed the primary tumor and lymph nodes on PET and CT scans acquired both prior to and during radiation treatment, which led to 8 combinations of image datasets. From each image set, we extracted high-throughput radiomic features of the following types: statistical, morphological, textural, histogram, and wavelet, resulting in a total of 437 features. We then performed unsupervised redundancy removal and a stability test on these features. To avoid over-fitting, we trained a logistic regression model with simultaneous feature selection based on the least absolute shrinkage and selection operator (LASSO). To objectively evaluate the prediction ability, we performed 5-fold cross validation (CV) with 50 random repeats of stratified bootstrapping. Feature selection and model training were conducted solely on the training set and independently validated on the holdout test set. The receiver operating characteristic (ROC) curve of the pooled results was generated, and the area under the ROC curve (AUC) was calculated as the figure of merit. Results: For predicting local-regional recurrence, our model built on pre-treatment PET of lymph nodes achieved the best performance (AUC=0.762) on 5-fold CV, which compared favorably with node volume and SUVmax (AUC=0.704 and 0.449, p<0.001). Wavelet coefficients turned out to be the most predictive features. Prediction of distant recurrence showed a similar trend, in which pre-treatment PET features of lymph nodes had the highest AUC of 0.705. Conclusion: The radiomics approach identified novel imaging features that are predictive of radiation treatment response. If prospectively validated in larger cohorts, they could aid in risk-adaptive treatment of HNC.

  15. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  16. SU-F-R-52: A Comparison of the Performance of Radiomic Features From Free Breathing and 4DCT Scans in Predicting Disease Recurrence in Lung Cancer SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huynh, E; Coroller, T; Narayan, V

    Purpose: There is a clinical need to identify patients who are at highest risk of recurrence after being treated with stereotactic body radiation therapy (SBRT). Radiomics offers a non-invasive approach by extracting quantitative features, reflecting tumor phenotype, from medical images that are predictive of outcome. Lung cancer patients treated with SBRT routinely undergo free breathing (FB image) and 4DCT (average intensity projection (AIP) image) scans for treatment planning to account for organ motion. The aim of the current study is to evaluate and compare the prognostic performance of radiomic features extracted from FB and AIP images in lung cancer patients treated with SBRT, to identify which image type would generate an optimal predictive model for recurrence. Methods: FB and AIP images of 113 Stage I-II NSCLC patients treated with SBRT were analysed. The prognostic performance of radiomic features for distant metastasis (DM) was evaluated by their concordance index (CI). Radiomic features were compared with conventional imaging metrics (e.g. diameter). All p-values were corrected for multiple testing using the false discovery rate. Results: All patients received SBRT and 20.4% of patients developed DM. From each image type (FB or AIP), nineteen radiomic features were selected based on stability and variance. The two image types shared five radiomic features; the remaining fourteen differed. One FB (CI=0.70) and five AIP (CI range=0.65-0.68) radiomic features were significantly prognostic for DM (p<0.05). None of the conventional features derived from FB images (CI range=0.60-0.61) were significant, but all AIP conventional features were (CI range=0.64-0.66). Conclusion: Features extracted from different types of CT scans have varying prognostic performance. AIP images contain more prognostic radiomic features for DM than FB images. These methods can provide personalized medicine approaches at low cost, as FB and AIP data are readily available within a large number of radiation oncology departments. R.M. had a consulting interest with Amgen (ended in 2015).
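
    The concordance index (CI) used above to rank features can be computed with a short pairwise routine. This is a hedged sketch on synthetic data: for right-censored survival data, CI is the fraction of comparable patient pairs in which the higher risk score corresponds to the earlier event (ties count as 0.5).

```python
# Sketch: concordance index for a prognostic feature on right-censored
# time-to-event data (synthetic values; O(n^2) pairwise implementation).
import numpy as np

def concordance_index(time, event, score):
    """Fraction of comparable pairs where the higher-risk score
    corresponds to the earlier observed event (ties count 0.5)."""
    n_conc, n_pairs = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if subject i had the event
            # strictly before subject j's follow-up time
            if event[i] and time[i] < time[j]:
                n_pairs += 1
                if score[i] > score[j]:
                    n_conc += 1.0
                elif score[i] == score[j]:
                    n_conc += 0.5
    return n_conc / n_pairs

time = np.array([5.0, 8.0, 12.0, 20.0])   # follow-up times (months)
event = np.array([1, 1, 0, 1])            # 1 = distant metastasis observed
risk = np.array([0.9, 0.7, 0.4, 0.1])     # feature perfectly anti-ordered
print(concordance_index(time, event, risk))  # -> 1.0
```

    A CI of 0.5 indicates chance-level ranking; the abstract's significant features reach 0.65-0.70.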

  17. Learning representative features for facial images based on a modified principal component analysis

    NASA Astrophysics Data System (ADS)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of the individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This suggests that the proposed approach is promising and can be used to predict subjective face attractiveness values in real facial image analysis systems.
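
    The general scheme, projecting face images onto principal components, fitting a regressor on one rater's scores, and evaluating with the Pearson correlation, can be sketched as below. The data are entirely synthetic (a low-rank "face" model stands in for real images), and the modified PCA of the paper is replaced here by ordinary PCA.

```python
# Sketch (synthetic data): PCA feature space + regression on one rater's
# scores, evaluated by the Pearson correlation on held-out images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 20))                 # hidden appearance factors
basis = rng.normal(size=(20, 32 * 32))              # factors -> pixels
images = latent @ basis + rng.normal(scale=0.05, size=(200, 32 * 32))
ratings = latent[:, 0] + rng.normal(scale=0.1, size=200)  # one rater's scores

X_tr, X_te, y_tr, y_te = train_test_split(images, ratings, random_state=1)
pca = PCA(n_components=20).fit(X_tr)                # learned feature space
reg = Ridge().fit(pca.transform(X_tr), y_tr)
pred = reg.predict(pca.transform(X_te))
r = np.corrcoef(pred, y_te)[0, 1]                   # Pearson correlation
print(round(r, 2))
```
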

  18. Sensor image prediction techniques

    NASA Astrophysics Data System (ADS)

    Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.

    1981-02-01

    The preparation of prediction imagery is a complex, costly, and time-consuming process. Image prediction systems that produce a detailed replica of the imaged area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks he performs during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of his performance when using a particular sensor can be extended to the analysis of his mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.

  19. Radiomics-based Prognosis Analysis for Non-Small Cell Lung Cancer

    NASA Astrophysics Data System (ADS)

    Zhang, Yucheng; Oikonomou, Anastasia; Wong, Alexander; Haider, Masoom A.; Khalvati, Farzad

    2017-04-01

    Radiomics characterizes tumor phenotypes by extracting large numbers of quantitative features from radiological images. Radiomic features have been shown to provide prognostic value in predicting clinical outcomes in several studies. However, several challenges, including feature redundancy, unbalanced data, and small sample sizes, have led to relatively low predictive accuracy. In this study, we explore different strategies for overcoming these challenges and improving the predictive performance of radiomics-based prognosis for non-small cell lung cancer (NSCLC). CT images of 112 patients (mean age 75 years) with NSCLC who underwent stereotactic body radiotherapy were used to predict recurrence, death, and recurrence-free survival using a comprehensive radiomics analysis. Different feature selection and predictive modeling techniques were used to determine the optimal configuration of the prognosis analysis. To address feature redundancy, comprehensive analysis indicated that Random Forest models and Principal Component Analysis were the optimal predictive modeling and feature selection methods, respectively, for achieving high prognostic performance. To address unbalanced data, the Synthetic Minority Over-sampling Technique (SMOTE) was found to significantly increase predictive accuracy. A full analysis of variance showed that data endpoints, feature selection techniques, and classifiers were significant factors affecting predictive accuracy, suggesting that these factors must be investigated when building radiomics-based predictive models for cancer prognosis.
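
    The over-sampling idea used to address unbalanced data can be sketched as follows. This is not the study's pipeline: a minimal SMOTE-style generator (interpolating minority samples toward their nearest neighbours) balances a synthetic dataset before training a Random Forest.

```python
# Sketch (synthetic data): SMOTE-style minority over-sampling, then a
# Random Forest on the balanced set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating each
    chosen sample toward one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)
    base = rng.integers(0, len(X_min), n_new)
    neigh = idx[base, rng.integers(1, k + 1, n_new)]  # column 0 is self
    gap = rng.random((n_new, 1))
    return X_min[base] + gap * (X_min[neigh] - X_min[base])

rng = np.random.default_rng(0)
X_maj = rng.normal(0, 1, size=(100, 10))      # majority class
X_min_cls = rng.normal(2, 1, size=(12, 10))   # scarce minority class
X_syn = smote(X_min_cls, n_new=88)            # balance the classes

X = np.vstack([X_maj, X_min_cls, X_syn])
y = np.array([0] * 100 + [1] * 100)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```

    In practice, over-sampling must be applied only inside the training folds to avoid leaking synthetic copies of test samples.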

  20. Mining hidden data to predict patient prognosis: texture feature extraction and machine learning in mammography

    NASA Astrophysics Data System (ADS)

    Leighs, J. A.; Halling-Brown, M. D.; Patel, M. N.

    2018-03-01

    The UK currently has a national breast cancer-screening program, and images are routinely collected from a number of screening sites, representing a wealth of invaluable data that is currently under-used. Radiologists evaluate screening images manually and recall suspicious cases for further analysis such as biopsy. Histological testing of biopsy samples confirms the malignancy of the tumour, along with other diagnostic and prognostic characteristics such as disease grade. Machine learning is becoming increasingly popular for clinical image classification problems, as it is capable of discovering patterns in data that would otherwise remain invisible. This is particularly true when applied to medical imaging features; however, clinical datasets are often relatively small. A texture feature extraction toolkit has been developed to mine a wide range of features from medical images such as mammograms. This study analysed a dataset of 1,366 radiologist-marked, biopsy-proven malignant lesions obtained from the OPTIMAM Medical Image Database (OMI-DB). Exploratory data analysis methods were employed to better understand the extracted features. Machine learning techniques including Classification and Regression Trees (CART), ensemble methods (e.g. random forests), and logistic regression were applied to the data to predict the disease grade of the analysed lesions. Prediction scores of up to 83% were achieved; the sensitivity and specificity of the trained models are discussed to put the results into a clinical context. The results show promise in the ability to predict prognostic indicators from the extracted texture features and thus enable prioritisation of care for patients at greatest risk.
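
    The three model families named above can be compared with a few lines of scikit-learn. This is an illustrative sketch on synthetic stand-in data (not the OMI-DB features): CART, a random-forest ensemble, and logistic regression, each scored by cross-validated accuracy.

```python
# Sketch (synthetic data): cross-validated comparison of CART, random
# forest, and logistic regression classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# stand-in for extracted texture features and disease-grade labels
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)
models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
results = {name: cross_val_score(m, X, y, cv=5).mean()
           for name, m in models.items()}
for name, acc in results.items():
    print(f"{name}: {acc:.2f}")
```
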

  1. Inferring diagnosis and trajectory of wet age-related macular degeneration from OCT imagery of retina

    NASA Astrophysics Data System (ADS)

    Irvine, John M.; Ghadar, Nastaran; Duncan, Steve; Floyd, David; O'Dowd, David; Lin, Kristie; Chang, Tom

    2017-03-01

    Quantitative biomarkers for assessing the presence, severity, and progression of age-related macular degeneration (AMD) would benefit research, diagnosis, and treatment. This paper explores the development of quantitative biomarkers derived from OCT imagery of the retina. OCT images for approximately 75 patients with Wet AMD, Dry AMD, and no AMD (healthy eyes) were analyzed to identify image features indicative of the patients' conditions. OCT image features provide a statistical characterization of the retina. Healthy eyes exhibit a layered structure, whereas chaotic patterns indicate the deterioration associated with AMD. Our approach uses wavelet and Frangi filtering, combined with statistical features that do not rely on image segmentation, to assess patient conditions. Classification analysis indicates clear separability of Wet AMD from other conditions, including Dry AMD and healthy retinas. The probability of correct classification was 95.7%, as determined from cross validation. Similar classification analysis predicts the response of Wet AMD patients to treatment, as measured by the Best Corrected Visual Acuity (BCVA): a statistical model predicts BCVA from the imagery features with R2 = 0.846. Initial analysis of OCT imagery thus indicates that imagery-derived features can provide useful biomarkers for the characterization and quantification of AMD: accurate discrimination of Wet AMD from other conditions, image-based prediction of the outcome of Wet AMD treatment, and accurate prediction of BCVA. Unlike many methods in the literature, our techniques do not rely on segmentation of the OCT image. Next steps include larger-scale testing and validation.
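
    The segmentation-free feature idea can be sketched as below. This is a loose illustration, not the authors' feature set: the Frangi vesselness filter (which responds to the layered, ridge-like structure of a healthy retina) is applied to two synthetic images, and global statistics of the response serve as the feature vector.

```python
# Sketch (synthetic images): Frangi filtering plus global statistics as
# segmentation-free OCT features.
import numpy as np
from skimage.filters import frangi

rng = np.random.default_rng(0)
# synthetic "retinas": horizontal layers (healthy-like) vs pure noise
layered = np.sin(np.linspace(0, 8 * np.pi, 64))[:, None] * np.ones((64, 64))
chaotic = rng.normal(size=(64, 64))

def stat_features(img):
    resp = frangi(img)  # enhances tubular / layered structure
    return np.array([resp.mean(), resp.std(), float(np.median(resp))])

f_layered = stat_features(layered)
f_chaotic = stat_features(chaotic)
print(f_layered, f_chaotic)
```

    In the paper these statistics (together with wavelet features) feed a classifier separating Wet AMD from the other conditions.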

  2. Apply radiomics approach for early stage prognostic evaluation of ovarian cancer patients: a preliminary study

    NASA Astrophysics Data System (ADS)

    Danala, Gopichandh; Wang, Yunzhi; Thai, Theresa; Gunderson, Camille; Moxley, Katherine; Moore, Kathleen; Mannel, Robert; Liu, Hong; Zheng, Bin; Qiu, Yuchen

    2017-03-01

    Predicting metastatic tumor response to chemotherapy at an early stage is critically important for improving the efficacy of clinical trials testing new chemotherapy drugs. However, current Response Evaluation Criteria in Solid Tumors (RECIST) guidelines yield only limited accuracy in predicting tumor response. To address this clinical challenge, we applied a radiomics approach to develop a new quantitative image analysis scheme, aiming to accurately assess tumor response to new chemotherapy treatment in advanced ovarian cancer patients. A retrospective dataset containing 57 patients was assembled, each with two sets of CT images: pre-therapy and 4-6-week follow-up scans. A radiomics-based image analysis scheme was then applied to these images in three steps. First, the tumors depicted on the CT images were segmented by a hybrid tumor segmentation scheme. Then, a total of 115 features were computed from the segmented tumors, grouped as 1) volume-based features, 2) density-based features, and 3) wavelet features. Finally, an optimal feature cluster was selected based on single-feature performance, and an equal-weighted fusion rule was applied to generate the final prediction score. The results demonstrated that the best single feature achieved an area under the receiver operating characteristic curve (AUC) of 0.838 +/- 0.053. This investigation demonstrates that the radiomics approach may have potential for developing a high-accuracy predictive model for early-stage prognostic assessment of ovarian cancer patients.
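
    The equal-weighted fusion rule can be sketched in a few lines. This is a hedged illustration on synthetic data: each selected feature is z-normalized, the normalized features are averaged into one score, and the score is evaluated by AUC.

```python
# Sketch (synthetic data): equal-weighted fusion of selected features
# into a single prediction score, evaluated by AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = np.array([0] * 28 + [1] * 29)  # responder labels for 57 patients
# three selected features, each weakly informative (synthetic)
feats = np.stack([y + rng.normal(scale=1.5, size=57) for _ in range(3)],
                 axis=1)

z = (feats - feats.mean(axis=0)) / feats.std(axis=0)  # normalize per feature
fused = z.mean(axis=1)                                # equal-weighted fusion
print(round(roc_auc_score(y, fused), 3))
```

    Averaging z-scores weights every selected feature equally, which is robust when per-feature performance estimates are noisy at this sample size.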

  3. Prediction of pork loin quality using online computer vision system and artificial intelligence model.

    PubMed

    Sun, Xin; Young, Jennifer; Liu, Jeng-Hung; Newman, David

    2018-06-01

    The objective of this project was to develop a computer vision system (CVS) for objective measurement of pork loin under industry speed requirements. Color images of pork loin samples were acquired using the CVS. Subjective color and marbling scores were determined according to the National Pork Board standards by a trained evaluator. Instrumental color measurements and crude fat percentage were used as control measurements. Image features (18 color features; 1 marbling feature; 88 texture features) were extracted from whole pork loin color images. An artificial intelligence prediction model (support vector machine) was established for pork color and marbling quality grades. The results showed that the CVS with support vector machine modeling reached the highest prediction accuracy of 92.5% for measured pork color score and 75.0% for measured pork marbling score. This research shows that the proposed artificial intelligence prediction model with the CVS can provide an effective tool for predicting color and marbling in the pork industry at online speeds.

  4. Magnetization-prepared rapid acquisition with gradient echo magnetic resonance imaging signal and texture features for the prediction of mild cognitive impairment to Alzheimer's disease progression.

    PubMed

    Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M; Galván-Tejada, Jorge I; Treviño, Victor; Tamez-Peña, Jose

    2014-10-01

    Early diagnosis of Alzheimer's disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, among which features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image could predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD Neuroimaging Initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind-test accuracy of 0.79. This model included six features, five of them obtained from the MRI images and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index. The groups were statistically different ([Formula: see text]). These results demonstrate that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and support the ongoing notion that multimodal biomarkers outperform single-modality ones.

  5. No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.

    PubMed

    Liu, Tsung-Jung; Liu, Kuan-Hsien

    2018-03-01

    A no-reference (NR) learning-based approach to assessing image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) that can predict quality scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, an ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one, and they turn out to perform better than the original single-scale method. Because the features come from five different domains at multiple image scales, and the outputs (scores) from selected score-prediction models are used as features for multi-scale or cross-scale fusion (i.e., the ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.

  6. TU-D-207B-03: Early Assessment of Response to Chemoradiotherapy Based On Textural Analysis of Pre and Mid-Treatment FDG-PET Image in Locally Advanced Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Y; Pollom, E; Loo, B

    Purpose: To evaluate whether tumor textural features extracted from both pre- and mid-treatment FDG-PET images predict early response to chemoradiotherapy in locally advanced head and neck cancer, and to investigate whether they provide complementary value to conventional volume-based measurements. Methods: Ninety-four patients with locally advanced head and neck cancers were retrospectively studied. All patients received definitive chemoradiotherapy and underwent FDG-PET planning scans both before and during treatment. Within the primary tumor we extracted 6 textural features based on gray-level co-occurrence matrices (GLCM): entropy, dissimilarity, contrast, correlation, energy, and homogeneity. These image features were evaluated for their power to predict response to chemoradiotherapy in terms of local recurrence-free survival (LRFS) and progression-free survival (PFS). Log-rank tests were used to assess the statistical significance of the stratification between low- and high-risk groups. P-values were adjusted for multiple comparisons by the false discovery rate (FDR) method. Results: All six textural features extracted from pre-treatment PET images significantly differentiated low- and high-risk patient groups for LRFS (P=0.011-0.038) and PFS (P=0.029-0.034). On the other hand, none of the textural features on mid-treatment PET images was statistically significant in stratifying LRFS (P=0.212-0.445) or PFS (P=0.168-0.299). An imaging signature combining a textural feature (GLCM homogeneity) and metabolic tumor volume showed improved performance for predicting LRFS (hazard ratio: 22.8, P<0.0001) and PFS (hazard ratio: 13.9, P=0.0005) in leave-one-out cross validation. Intra-tumor heterogeneity measured by textural features was significantly lower in mid-treatment PET images than in pre-treatment PET images (T-test: P<1.4e-6).
    Conclusion: Tumor textural features on pre-treatment FDG-PET images are predictive of response to chemoradiotherapy in locally advanced head and neck cancer. The complementary information offered by textural features improves patient stratification and may potentially aid in personalized risk-adaptive therapy.

  7. Predicting the amount of coke deposition on catalyst pellets through image analysis and soft computing

    NASA Astrophysics Data System (ADS)

    Zhang, Jingqiong; Zhang, Wenbiao; He, Yuting; Yan, Yong

    2016-11-01

    The amount of coke deposition on catalyst pellets is one of the most important indicators of catalytic performance and service life. It is therefore essential to measure it and to analyze the active state of the catalysts during a continuous production process. This paper proposes a new method to predict the amount of coke deposition on catalyst pellets based on image analysis and soft computing. An image acquisition system consisting of a flatbed scanner and an opaque cover is used to obtain catalyst images. After image processing and feature extraction, twelve effective features are selected and the two best feature sets are determined by prediction tests. A neural network optimized by a particle swarm optimization algorithm is used to establish the prediction model of the coke amount on the various datasets. The root mean square errors of the predictions are all below 0.021, and the coefficients of determination (R2) of the models are all above 78.71%. A feasible, effective and precise method is therefore demonstrated, which may be applied to realize real-time measurement of coke deposition based on on-line sampling and fast image analysis.
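
    The optimization idea, tuning a predictor's parameters with particle swarm optimization (PSO), can be sketched on a toy problem. This is not the paper's network: for brevity, the swarm here fits the two parameters of a linear model by minimizing squared error, using standard PSO velocity/position updates.

```python
# Toy PSO sketch: a swarm of candidate parameter vectors is pulled toward
# each particle's personal best and the global best solution.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.5                         # target model: w = 2, b = 0.5

def loss(p):                              # p = (w, b)
    return float(np.mean((p[0] * x + p[1] - y) ** 2))

n, iters = 20, 100
pos = rng.uniform(-5, 5, size=(n, 2))     # particle positions (w, b)
vel = np.zeros((n, 2))
pbest = pos.copy()                        # personal bests
pbest_val = np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()  # global best

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    # inertia + cognitive pull (pbest) + social pull (gbest)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(np.round(gbest, 2), round(loss(gbest), 4))
```

    In the paper the same search principle tunes the neural network rather than a two-parameter line.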

  8. Quantitative imaging features: extension of the oncology medical image database

    NASA Astrophysics Data System (ADS)

    Patel, M. N.; Looney, P. T.; Young, K. C.; Halling-Brown, M. D.

    2015-03-01

    Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. With the advent of digital imaging modalities and the rapid growth in both diagnostic and therapeutic imaging, the ability to harness this large influx of data is of paramount importance. The Oncology Medical Image Database (OMI-DB) was created to provide a centralized, fully annotated dataset for research. The database contains both processed and unprocessed images, associated data, annotations and, where applicable, expert-determined ground truths describing features of interest. Medical imaging provides the ability to detect and localize many changes that are important to determine whether a disease is present or a therapy is effective, by depicting alterations in anatomic, physiologic, biochemical or molecular processes. Quantitative imaging features are sensitive, specific, accurate and reproducible imaging measures of these changes. Here, we describe an extension to the OMI-DB whereby a range of imaging features and descriptors are pre-calculated using a high-throughput approach. The ability to calculate multiple imaging features and data from the acquired images is valuable and facilitates further research applications investigating detection, prognosis, and classification. The resultant data store contains more than 10 million quantitative features as well as features derived from CAD predictions. These data can be used to build predictive models to aid image classification and treatment response assessment, as well as to identify prognostic imaging biomarkers.

  9. Prediction of survival with multi-scale radiomic analysis in glioblastoma patients.

    PubMed

    Chaddad, Ahmad; Sabri, Siham; Niazi, Tamim; Abdulkarim, Bassam

    2018-06-19

    We propose multiscale texture features based on the Laplacian-of-Gaussian (LoG) filter to predict progression-free survival (PFS) and overall survival (OS) in patients newly diagnosed with glioblastoma (GBM). Experiments use features extracted from 40 GBM patients with T1-weighted imaging (T1-WI) and fluid-attenuated inversion recovery (FLAIR) images that were segmented manually into areas of active tumor, necrosis, and edema. Multiscale texture features were extracted locally from each of these areas of interest using a LoG filter, and the relation of the features to OS and PFS was investigated using univariate (i.e., Spearman's rank correlation coefficient, log-rank test, and Kaplan-Meier estimator) and multivariate (i.e., random forest classifier) analyses. Three and seven features were statistically correlated with PFS and OS, respectively, with absolute correlation values between 0.32 and 0.36 and p < 0.05. Three features derived from active tumor regions only were associated with OS (p < 0.05), with hazard ratios (HR) of 2.9, 3, and 3.24, respectively. Combined features showed AUC values of 85.37% and 85.54% for predicting the PFS and OS of GBM patients, respectively, using the random forest (RF) classifier. We presented multiscale texture features to characterize the GBM regions and predict PFS and OS. The achievable performance suggests that this technique can be developed into a GBM MR analysis system suitable for clinical use after a thorough validation involving more patients. Graphical abstract: Scheme of the proposed model for characterizing the heterogeneity of GBM regions and predicting the overall survival and progression-free survival of GBM patients.
    (1) Acquisition of pre-treatment MRI images; (2) affine registration of the T1-WI image with its corresponding FLAIR images, and GBM subtype (phenotype) labelling; (3) extraction of nine texture features at each of three scales (fine, medium, and coarse) from each of the GBM regions; (4) comparison of heterogeneity between GBM regions by ANOVA test; survival analysis using univariate methods (Spearman rank correlation between features and survival, i.e., PFS and OS, for each of the GBM regions; Kaplan-Meier estimator and log-rank test applied to patient groups split at the median of a feature) and a multivariate method (random forest model) for predicting the PFS and OS of patient groups split at the median of PFS and OS.
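
    Step (3) of the scheme, multiscale LoG texture extraction, can be sketched with SciPy. This is an illustrative sketch on a synthetic region: the image is filtered at three sigma values standing in for the fine, medium, and coarse scales, and first-order statistics of each response are kept as features.

```python
# Sketch (synthetic region): multiscale Laplacian-of-Gaussian filtering
# with first-order statistics of each response as texture features.
import numpy as np
from scipy.ndimage import gaussian_laplace

rng = np.random.default_rng(0)
region = rng.normal(size=(64, 64))          # stand-in for a GBM region

features = []
for sigma in (1.0, 2.5, 5.0):               # fine, medium, coarse scales
    resp = gaussian_laplace(region, sigma=sigma)
    features += [float(resp.mean()), float(resp.std()),
                 float(np.percentile(resp, 90))]
print(len(features))                        # 3 scales x 3 statistics
```

    Larger sigmas suppress fine detail, so the three responses probe heterogeneity at progressively coarser spatial scales.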

  10. Multi-center prediction of hemorrhagic transformation in acute ischemic stroke using permeability imaging features.

    PubMed

    Scalzo, Fabien; Alger, Jeffry R; Hu, Xiao; Saver, Jeffrey L; Dani, Krishna A; Muir, Keith W; Demchuk, Andrew M; Coutts, Shelagh B; Luby, Marie; Warach, Steven; Liebeskind, David S

    2013-07-01

    Permeability images derived from magnetic resonance (MR) perfusion images are sensitive to blood-brain barrier derangement of the brain tissue and have been shown to correlate with subsequent development of hemorrhagic transformation (HT) in acute ischemic stroke. This paper presents a multi-center retrospective study that evaluates the power of six permeability MRI measures to predict HT, including contrast slope (CS), final contrast (FC), maximum peak bolus concentration (MPB), peak bolus area (PB), relative recirculation (rR), and percentage recovery (%R). Dynamic T2*-weighted perfusion MR images were collected from 263 acute ischemic stroke patients from four medical centers. An essential aspect of this study is to exploit a classifier-based framework to automatically identify predictive patterns in the overall intensity distribution of the permeability maps. The model is based on normalized intensity histograms that are used as input features to the predictive model. Linear and nonlinear predictive models are evaluated using cross-validation to measure generalization to new patients, and a comparative analysis is provided for the different types of parameters. Results demonstrate that perfusion imaging in acute ischemic stroke can predict HT with an average accuracy of more than 85% using a predictive model based on nonlinear regression. Results also indicate that the permeability feature based on the percentage of recovery performs significantly better than the other features. This novel model may be used to refine treatment decisions in acute stroke.

  11. Particle Pollution Estimation Based on Image Analysis

    PubMed Central

    Liu, Chenbin; Tsow, Francis; Zou, Yi; Tao, Nongjian

    2016-01-01

    Exposure to fine particles can cause various diseases, and an easily accessible method to monitor the particles can help raise public awareness and reduce harmful exposures. Here we report a method to estimate PM air pollution based on analysis of a large number of outdoor images available for Beijing, Shanghai (China) and Phoenix (US). Six image features were extracted from the images and used, together with other relevant data such as the position of the sun, date, time, geographic information and weather conditions, to predict the PM2.5 index. The results demonstrate that the image analysis method provides good predictions of the PM2.5 index, and that different features have different significance levels in the prediction. PMID:26828757

  13. Robust tumor morphometry in multispectral fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Tabesh, Ali; Vengrenyuk, Yevgen; Teverovskiy, Mikhail; Khan, Faisal M.; Sapir, Marina; Powell, Douglas; Mesa-Tejada, Ricardo; Donovan, Michael J.; Fernandez, Gerardo

    2009-02-01

    Morphological and architectural characteristics of primary tissue compartments, such as epithelial nuclei (EN) and cytoplasm, provide important cues for cancer diagnosis, prognosis, and therapeutic response prediction. We propose two feature sets for the robust quantification of these characteristics in multiplex immunofluorescence (IF) microscopy images of prostate biopsy specimens. To enable feature extraction, EN and cytoplasm regions were first segmented from the IF images. Then, feature sets consisting of the characteristics of the minimum spanning tree (MST) connecting the EN and the fractal dimension (FD) of gland boundaries were obtained from the segmented compartments. We demonstrated the utility of the proposed features in prostate cancer recurrence prediction on a multi-institution cohort of 1027 patients. Univariate analysis revealed that both FD and one of the MST features were highly effective for predicting cancer recurrence (p <= 0.0001). In multivariate analysis, an MST feature was selected for a model incorporating clinical and image features. The model achieved a concordance index (CI) of 0.73 on the validation set, which was significantly higher than the CI of 0.69 for the standard multivariate model based solely on clinical features currently used in clinical practice (p < 0.0001). The contributions of this work are twofold. First, it is the first demonstration of the utility of the proposed features in morphometric analysis of IF images. Second, this is the largest scale study of the efficacy and robustness of the proposed features in prostate cancer prognosis.
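
    The MST feature idea can be sketched with SciPy: build the minimum spanning tree over epithelial-nucleus centroids and summarize its edge lengths. This is an illustrative sketch on a synthetic point pattern, not the authors' exact feature definitions.

```python
# Sketch (synthetic centroids): minimum spanning tree over nucleus
# positions, with edge-length statistics as architectural features.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
centroids = rng.random((40, 2))           # epithelial-nucleus centroids

dist = squareform(pdist(centroids))       # dense pairwise distances
mst = minimum_spanning_tree(dist)         # sparse MST with n-1 edges
edges = mst.data                          # the kept edge lengths
features = {"mean_edge": float(edges.mean()),
            "std_edge": float(edges.std()),
            "max_edge": float(edges.max())}
print(features)
```

    Tight, regular glandular architecture yields short, uniform MST edges, whereas disordered growth stretches and disperses them, which is why such statistics carry prognostic signal.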

  14. Use of a Machine-learning Method for Predicting Highly Cited Articles Within General Radiology Journals.

    PubMed

    Rosenkrantz, Andrew B; Doshi, Ankur M; Ginocchio, Luke A; Aphinyanaphongs, Yindalon

    2016-12-01

    This study aimed to assess the performance of a text classification machine-learning model in predicting highly cited articles within the recent radiological literature and to identify the model's most influential article features. We downloaded from PubMed the title, abstract, and medical subject heading terms for 10,065 articles published in 25 general radiology journals in 2012 and 2013. Three machine-learning models were applied to predict the top 10% of included articles in terms of the number of citations to the article in 2014 (reflecting the 2-year time window in conventional impact factor calculations). The model having the highest area under the curve was selected to derive a list of article features (words) predicting high citation volume, which was iteratively reduced to identify the smallest possible core feature list maintaining predictive power. Overall themes were qualitatively assigned to the core features. The regularized logistic regression (Bayesian binary regression) model had the highest performance, achieving an area under the curve of 0.814 in predicting articles in the top 10% of citation volume. We reduced the initial 14,083 features to 210 features that maintained predictive power. These features corresponded with topics relating to various imaging techniques (eg, diffusion-weighted magnetic resonance imaging, hyperpolarized magnetic resonance imaging, dual-energy computed tomography, computed tomography reconstruction algorithms, tomosynthesis, elastography, and computer-aided diagnosis), particular pathologies (prostate cancer; thyroid nodules; hepatic adenoma, hepatocellular carcinoma, non-alcoholic fatty liver disease), and other topics (radiation dose, electroporation, education, general oncology, gadolinium, statistics). Machine learning can be successfully applied to create specific feature-based models for predicting articles likely to achieve high influence within the radiological literature.
Copyright © 2016 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  15. Online prediction of organoleptic data for snack food using color images

    NASA Astrophysics Data System (ADS)

    Yu, Honglu; MacGregor, John F.

    2004-11-01

    In this paper, a study of the real-time prediction of organoleptic properties of snack food from RGB color images is presented. The so-called organoleptic properties, which are properties based on texture, taste and sight, are generally measured either by human sensory response or by mechanical devices. Neither of these two methods can be used for on-line feedback control in high-speed production. In this situation, a vision-based soft sensor is very attractive. By taking images of the products, the samples remain untouched and the product properties can be predicted in real time from image data. Four types of organoleptic properties are considered in this study: blister level, toast points, taste and peak break force. Wavelet transforms are applied to the color images, and the averaged absolute value of each filtered image is used as a texture feature variable. In order to handle the high correlation among the feature variables, Partial Least Squares (PLS) regression is used to regress the extracted feature variables against the four response variables.

  16. Magnetization-prepared rapid acquisition with gradient echo magnetic resonance imaging signal and texture features for the prediction of mild cognitive impairment to Alzheimer’s disease progression

    PubMed Central

    Martinez-Torteya, Antonio; Rodriguez-Rojas, Juan; Celaya-Padilla, José M.; Galván-Tejada, Jorge I.; Treviño, Victor; Tamez-Peña, Jose

    2014-01-01

    Abstract. Early diagnosis of Alzheimer’s disease (AD) would confer many benefits. Several biomarkers have been proposed to achieve such a task, where features extracted from magnetic resonance imaging (MRI) have played an important role. However, studies have focused exclusively on morphological characteristics. This study aims to determine whether features relating to the signal and texture of the image could predict mild cognitive impairment (MCI) to AD progression. Clinical, biological, and positron emission tomography information and MRI images of 62 subjects from the AD neuroimaging initiative were used in this study, extracting 4150 features from each MRI. Within this multimodal database, a feature selection algorithm was used to obtain an accurate and small logistic regression model, generated by a methodology that yielded a mean blind test accuracy of 0.79. This model included six features, five of them obtained from the MRI images and one obtained from genotyping. A risk analysis divided the subjects into low-risk and high-risk groups according to a prognostic index. The groups were statistically different (p-value=2.04e−11). These results demonstrated that MRI features related to both signal and texture add predictive power for MCI-to-AD progression, and supported the ongoing notion that multimodal biomarkers outperform single-modality ones. PMID:26158047
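
    A schematic of the risk-stratification step on synthetic data: fit a small logistic model, use its linear predictor as a prognostic index, and split subjects at the median. The features, effect sizes, and labels are all invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 62
X = rng.normal(size=(n, 6))   # 5 MRI signal/texture features + 1 genotype stand-in
beta = np.array([1.2, -0.8, 0.9, 0.0, 0.5, 1.0])   # hypothetical true effects
progressed = (X @ beta + 0.5 * rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, progressed)
prognostic_index = X @ model.coef_[0] + model.intercept_[0]
high = prognostic_index > np.median(prognostic_index)
print(progressed[high].mean(), progressed[~high].mean())  # progression rate per group
```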

  17. Evaluation of chemotherapy response in ovarian cancer treatment using quantitative CT image biomarkers: a preliminary study

    NASA Astrophysics Data System (ADS)

    Qiu, Yuchen; Tan, Maxine; McMeekin, Scott; Thai, Theresa; Moore, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2015-03-01

    The purpose of this study is to identify and apply quantitative image biomarkers for early prediction of tumor response to chemotherapy among ovarian cancer patients who participated in clinical trials testing new drugs. In the experiment, we retrospectively selected 30 cases from patients who participated in Phase I clinical trials of new drugs or drug agents for ovarian cancer treatment. Each case is composed of two sets of CT images acquired pre- and post-treatment (4-6 weeks after starting treatment). A computer-aided detection (CAD) scheme was developed to extract and analyze the quantitative image features of the metastatic tumors previously tracked by the radiologists using the standard Response Evaluation Criteria in Solid Tumors (RECIST) guideline. The CAD scheme first segmented 3-D tumor volumes from the background using a hybrid tumor segmentation scheme. Then, for each segmented tumor, CAD computed three quantitative image features: the change in tumor volume, tumor CT number (density), and density variance. The feature changes were calculated between the matched tumors tracked on the CT images acquired pre- and post-treatment. Finally, CAD predicted each patient's 6-month progression-free survival (PFS) using a decision-tree based classifier. The performance of the CAD scheme was compared with the RECIST category. The results show that the CAD scheme achieved a prediction accuracy of 76.7% (23/30 cases) with a Kappa coefficient of 0.493, significantly higher than the performance of RECIST prediction, which had a prediction accuracy of 60% (17/30) and a Kappa coefficient of 0.062. This study demonstrated the feasibility of analyzing quantitative image features to improve the accuracy of early prediction of tumor response to new drugs or therapeutic methods for ovarian cancer patients.
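
    The final prediction step, a decision-tree classifier over the per-tumor feature changes evaluated with a Kappa coefficient, might look like this on synthetic data (the threshold rule generating the labels is an invented stand-in, not the study's data):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)
n = 30
# Synthetic per-patient changes between pre- and post-treatment scans:
# [delta volume, delta mean CT density, delta density variance]
delta = rng.normal(size=(n, 3))
pfs = (delta[:, 0] < 0.2).astype(int)   # hypothetical: shrinking tumors -> 6-month PFS

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(delta, pfs)
kappa = cohen_kappa_score(pfs, clf.predict(delta))
print(round(kappa, 2))
```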

  18. Insights into multimodal imaging classification of ADHD

    PubMed Central

    Colby, John B.; Rudie, Jeffrey D.; Brown, Jesse A.; Douglas, Pamela K.; Cohen, Mark S.; Shehzad, Zarrar

    2012-01-01

    Attention deficit hyperactivity disorder (ADHD) is currently diagnosed in children by clinicians via subjective ADHD-specific behavioral instruments and reports from parents and teachers. Considering its high prevalence and large economic and societal costs, a quantitative tool that aids in diagnosis by characterizing underlying neurobiology would be extremely valuable. This provided motivation for the ADHD-200 machine learning (ML) competition, a multisite collaborative effort to investigate imaging classifiers for ADHD. Here we present our ML approach, which used structural and functional magnetic resonance imaging data, combined with demographic information, to distinguish individuals with ADHD from typically developing (TD) children across eight different research sites. Structural features included quantitative metrics from 113 cortical and non-cortical regions. Functional features included Pearson correlation functional connectivity matrices, nodal and global graph theoretical measures, nodal power spectra, voxelwise global connectivity, and voxelwise regional homogeneity. We performed feature ranking for each site and modality using the multiple support vector machine recursive feature elimination (SVM-RFE) algorithm, and feature subset selection by optimizing the expected generalization performance of a radial basis function kernel SVM (RBF-SVM) trained across a range of the top features. Site-specific RBF-SVMs using these optimal feature sets from each imaging modality were used to predict the class labels of an independent hold-out test set. A voting approach was used to combine these multiple predictions and assign final class labels. With this methodology we were able to predict diagnosis of ADHD with 55% accuracy (versus a 39% chance level in this sample), 33% sensitivity, and 80% specificity.
This approach also allowed us to evaluate predictive structural and functional features giving insight into abnormal brain circuitry in ADHD. PMID:22912605
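
    The SVM-RFE ranking stage can be sketched with scikit-learn's RFE wrapper around a linear SVM; the data below are synthetic, with only the first three features informative by construction:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(3)
n, d = 80, 20
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)   # 3 informative features

# SVM-RFE: recursively drop the feature with the smallest |SVM weight|
ranker = RFE(SVC(kernel="linear"), n_features_to_select=3).fit(X, y)
selected = set(np.flatnonzero(ranker.support_))
print(sorted(selected))
```

    In the paper this ranking is followed by a search over the number of top features that maximizes cross-validated RBF-SVM performance; the sketch stops at the ranking itself.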

  19. Non-parametric adaptive JPEG fragments carving

    NASA Astrophysics Data System (ADS)

    Amrouche, Sabrina Cherifa; Salamani, Dalila

    2018-04-01

    The most challenging JPEG recovery tasks arise when the file header is missing. In this paper we propose to use a two layer machine learning model to restore headerless JPEG images. We first build a classifier able to identify the structural properties of the images/fragments and then use an AutoEncoder (AE) to learn the fragment features for the header prediction. We define a JPEG universal header and the remaining free image parameters (Height, Width) are predicted with a Gradient Boosting Classifier. Our approach resulted in 90% accuracy using the manually defined features and 78% accuracy using the AE features.
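
    A sketch of the final stage, predicting a free image parameter from fragment features with gradient boosting; the features and the binary width target below are synthetic stand-ins (the paper predicts Height and Width from autoencoder features):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
n = 200
frag_feats = rng.normal(size=(n, 8))   # stand-ins for learned fragment features
# Hypothetical binary target: which of two candidate widths the image has
width_class = (frag_feats[:, 0] + 0.5 * frag_feats[:, 3] > 0).astype(int)

clf = GradientBoostingClassifier(random_state=0).fit(frag_feats, width_class)
print(round(clf.score(frag_feats, width_class), 2))   # in-sample accuracy
```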

  20. Fish swarm intelligent to optimize real time monitoring of chips drying using machine vision

    NASA Astrophysics Data System (ADS)

    Hendrawan, Y.; Hawa, L. C.; Damayanti, R.

    2018-03-01

    This study attempted to apply a machine vision-based chips drying monitoring system able to optimise the drying process of cassava chips. The objective of this study is to propose fish swarm intelligent (FSI) optimization algorithms to find the most significant set of image features suitable for predicting the water content of cassava chips during the drying process using an artificial neural network (ANN) model. Feature selection entails choosing the feature subset that maximizes the prediction accuracy of the ANN. Multi-Objective Optimization (MOO) was used in this study, consisting of prediction accuracy maximization and feature-subset size minimization. The results showed that the best feature subset consisted of grey mean, L(Lab) mean, a(Lab) energy, red entropy, hue contrast, and grey homogeneity. The best feature subset was tested successfully in the ANN model to describe the relationship between image features and the water content of cassava chips during drying, with an R2 of 0.9 between real and predicted data.
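
    The multi-objective idea, maximize prediction accuracy while minimizing subset size, can be illustrated with a scalarized objective and a simple greedy search standing in for the fish-swarm optimizer (linear regression replaces the ANN for brevity; everything here is an illustrative assumption):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n, d = 100, 8
X = rng.normal(size=(n, d))
y = 2 * X[:, 0] - X[:, 4] + 0.1 * rng.normal(size=n)   # water content (synthetic)

def objective(subset):
    # Scalarized multi-objective score: maximize R^2, penalize subset size
    r2 = LinearRegression().fit(X[:, subset], y).score(X[:, subset], y)
    return r2 - 0.01 * len(subset)

selected, remaining = [], list(range(d))
while remaining:
    best = max(remaining, key=lambda f: objective(selected + [f]))
    if objective(selected + [best]) <= (objective(selected) if selected else -1.0):
        break
    selected.append(best)
    remaining.remove(best)
print(sorted(selected))   # informative features survive the size penalty
```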

  1. A new approach to modeling the influence of image features on fixation selection in scenes

    PubMed Central

    Nuthmann, Antje; Einhäuser, Wolfgang

    2015-01-01

    Which image characteristics predict where people fixate when memorizing natural images? To answer this question, we introduce a new analysis approach that combines a novel scene-patch analysis with generalized linear mixed models (GLMMs). Our method allows for (1) directly describing the relationship between continuous feature value and fixation probability, and (2) assessing each feature's unique contribution to fixation selection. To demonstrate this method, we estimated the relative contribution of various image features to fixation selection: luminance and luminance contrast (low-level features); edge density (a mid-level feature); visual clutter and image segmentation to approximate local object density in the scene (higher-level features). An additional predictor captured the central bias of fixation. The GLMM results revealed that edge density, clutter, and the number of homogenous segments in a patch can independently predict whether image patches are fixated or not. Importantly, neither luminance nor contrast had an independent effect above and beyond what could be accounted for by the other predictors. Since the parcellation of the scene and the selection of features can be tailored to the specific research question, our approach allows for assessing the interplay of various factors relevant for fixation selection in scenes in a powerful and flexible manner. PMID:25752239

  2. Predicting plant biomass accumulation from image-derived parameters

    PubMed Central

    Chen, Dijun; Shi, Rongli; Pape, Jean-Michel; Neumann, Kerstin; Graner, Andreas; Chen, Ming; Klukas, Christian

    2018-01-01

    Abstract Background Image-based high-throughput phenotyping technologies have been rapidly developed in plant science recently, and they provide great potential to gain more valuable information than traditional destructive methods. Predicting plant biomass is regarded as a key goal for plant breeders and ecologists. However, it is a great challenge to find a predictive biomass model across experiments. Results In the present study, we constructed 4 predictive models to examine the quantitative relationship between image-based features and plant biomass accumulation. Our methodology has been applied to 3 consecutive barley (Hordeum vulgare) experiments with control and stress treatments. The results showed that plant biomass can be accurately predicted from image-based parameters using a random forest model. The high prediction accuracy based on this model will contribute to relieving the phenotyping bottleneck in biomass measurement in breeding applications. The prediction performance is still relatively high across experiments under similar conditions. The relative contribution of individual features for predicting biomass was further quantified, revealing new insights into the phenotypic determinants of the plant biomass outcome. Furthermore, these methods could also be used to determine the most important image-based features related to plant biomass accumulation, which would be promising for subsequent genetic mapping to uncover the genetic basis of biomass. Conclusions We have developed quantitative models to accurately predict plant biomass accumulation from image data. We anticipate that the analysis results will be useful to advance our views of the phenotypic determinants of plant biomass outcome, and the statistical methods can be broadly used for other plant species. PMID:29346559
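
    The central model, a random forest regressing biomass on image-derived parameters with per-feature contributions read off the importances, can be sketched as follows; trait names, coefficients, and data are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(6)
n = 120
# Image-derived parameters: projected area, height, compactness, colour indices...
traits = rng.uniform(size=(n, 5))
biomass = 3.0 * traits[:, 0] + 1.5 * traits[:, 1] ** 2 + 0.05 * rng.normal(size=n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(traits, biomass)
print(round(rf.score(traits, biomass), 2))               # in-sample R^2
print(sorted(np.argsort(rf.feature_importances_)[-2:]))  # top contributing traits
```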

  3. A new breast cancer risk analysis approach using features extracted from multiple sub-regions on bilateral mammograms

    NASA Astrophysics Data System (ADS)

    Sun, Wenqing; Tseng, Tzu-Liang B.; Zheng, Bin; Zhang, Jianying; Qian, Wei

    2015-03-01

    A novel breast cancer risk analysis approach is proposed for enhancing performance of computerized breast cancer risk analysis using bilateral mammograms. Based on the intensity of the breast area, five different sub-regions were acquired from one mammogram, and bilateral features were extracted from every sub-region. Our dataset includes 180 bilateral mammograms from 180 women who underwent routine screening examinations, all interpreted as negative and not recalled by the radiologists during the original screening procedures. A computerized breast cancer risk analysis scheme using four image processing modules, including sub-region segmentation, bilateral feature extraction, feature selection, and classification, was designed to detect and compute image feature asymmetry between the left and right breasts imaged on the mammograms. The highest computed area under the curve (AUC) is 0.763 ± 0.021 when applying the multiple sub-region features to our testing dataset. The positive predictive value and the negative predictive value were 0.60 and 0.73, respectively. The study demonstrates that (1) features extracted from multiple sub-regions can improve the performance of our scheme compared to using features from the whole breast area only; (2) a classifier using bilateral asymmetry features can effectively predict breast cancer risk; (3) incorporating texture and morphological features with density features can boost the classification accuracy.

  4. Fine-tuning convolutional deep features for MRI based brain tumor classification

    NASA Astrophysics Data System (ADS)

    Ahmed, Kaoutar B.; Hall, Lawrence O.; Goldgof, Dmitry B.; Liu, Renhao; Gatenby, Robert A.

    2017-03-01

    Prediction of survival time from brain tumor magnetic resonance images (MRI) is not commonly performed and would ordinarily be a time consuming process. However, current cross-sectional imaging techniques, particularly MRI, can be used to generate many features that may provide information on the patient's prognosis, including survival. This information can potentially be used to identify individuals who would benefit from more aggressive therapy. Rather than using pre-defined and hand-engineered features as with current radiomics methods, we investigated the use of deep features extracted from pre-trained convolutional neural networks (CNNs) in predicting survival time. We also provide evidence for the power of domain-specific fine-tuning in improving the performance of a pre-trained CNN, even though our data set is small. We fine-tuned a CNN initially trained on a large natural image recognition dataset (ImageNet ILSVRC) and transferred the learned feature representations to the survival time prediction task, obtaining over 81% accuracy in leave-one-out cross validation.

  5. Radiogenomics of hepatocellular carcinoma: multiregion analysis-based identification of prognostic imaging biomarkers by integrating gene data—a preliminary study

    NASA Astrophysics Data System (ADS)

    Xia, Wei; Chen, Ying; Zhang, Rui; Yan, Zhuangzhi; Zhou, Xiaobo; Zhang, Bo; Gao, Xin

    2018-02-01

    Our objective was to identify prognostic imaging biomarkers for hepatocellular carcinoma in contrast-enhanced computed tomography (CECT) with biological interpretations by associating imaging features and gene modules. We retrospectively analyzed 371 patients who had gene expression profiles. For the 38 patients with CECT imaging data, automatic intra-tumor partitioning was performed, resulting in three spatially distinct subregions. We extracted a total of 37 quantitative imaging features describing intensity, geometry, and texture from each subregion. Imaging features were selected after robustness and redundancy analysis. Gene modules acquired from clustering were chosen for their prognostic significance. By constructing an association map between imaging features and gene modules with Spearman rank correlations, the imaging features that significantly correlated with gene modules were obtained. These features were evaluated with Cox’s proportional hazard models and Kaplan-Meier estimates to determine their prognostic capabilities for overall survival (OS). Eight imaging features were significantly correlated with prognostic gene modules, and two of them were associated with OS. Among these, the geometry feature volume fraction of the subregion, which was significantly correlated with all prognostic gene modules representing cancer-related interpretation, was predictive of OS (Cox p = 0.022, hazard ratio = 0.24). The texture feature cluster prominence in the subregion, which was correlated with the prognostic gene module representing lipid metabolism and complement activation, also had the ability to predict OS (Cox p = 0.021, hazard ratio = 0.17). Imaging features depicting the volume fraction and textural heterogeneity in subregions have the potential to be predictors of OS with interpretable biological meaning.
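
    The association map is built from Spearman rank correlations between imaging features and gene-module scores; a minimal sketch on synthetic data (the correlation structure is assumed, not taken from the study):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 38
module_score = rng.normal(size=n)   # prognostic gene-module eigengene (synthetic)
volume_fraction = 0.8 * module_score + 0.3 * rng.normal(size=n)   # associated feature
cluster_prominence = rng.normal(size=n)                           # unrelated feature

rho, p = spearmanr(volume_fraction, module_score)
rho_u, p_u = spearmanr(cluster_prominence, module_score)
print(round(rho, 2), p < 0.05)    # the associated feature passes the screen
print(round(rho_u, 2))            # the unrelated feature typically does not
```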

  6. Predicting perceptual quality of images in realistic scenario using deep filter banks

    NASA Astrophysics Data System (ADS)

    Zhang, Weixia; Yan, Jia; Hu, Shiyong; Ma, Yang; Deng, Dexiang

    2018-03-01

    Classical image perceptual quality assessment models usually resort to natural scene statistic methods, which are based on an assumption that certain reliable statistical regularities hold on undistorted images and will be corrupted by introduced distortions. However, these models usually fail to accurately predict degradation severity of images in realistic scenarios since complex, multiple, and interactive authentic distortions usually appear on them. We propose a quality prediction model based on convolutional neural network. Quality-aware features extracted from filter banks of multiple convolutional layers are aggregated into the image representation. Furthermore, an easy-to-implement and effective feature selection strategy is used to further refine the image representation and finally a linear support vector regression model is trained to map image representation into images' subjective perceptual quality scores. The experimental results on benchmark databases present the effectiveness and generalizability of the proposed model.

  7. Plaque echodensity and textural features are associated with histologic carotid plaque instability.

    PubMed

    Doonan, Robert J; Gorgui, Jessica; Veinot, Jean P; Lai, Chi; Kyriacou, Efthyvoulos; Corriveau, Marc M; Steinmetz, Oren K; Daskalopoulou, Stella S

    2016-09-01

    Carotid plaque echodensity and texture features predict cerebrovascular symptomatology. Our purpose was to determine the association of echodensity and textural features obtained from a digital image analysis (DIA) program with histologic features of plaque instability as well as to identify the specific morphologic characteristics of unstable plaques. Patients scheduled to undergo carotid endarterectomy were recruited and underwent carotid ultrasound imaging. DIA was performed to extract echodensity and textural features using Plaque Texture Analysis software (LifeQ Medical Ltd, Nicosia, Cyprus). Carotid plaque surgical specimens were obtained and analyzed histologically. Principal component analysis (PCA) was performed to reduce imaging variables. Logistic regression models were used to determine if PCA variables and individual imaging variables predicted histologic features of plaque instability. Image analysis data from 160 patients were analyzed. Individual imaging features of plaque echolucency and homogeneity were associated with a more unstable plaque phenotype on histology. These results were independent of age, sex, and degree of carotid stenosis. PCA reduced 39 individual imaging variables to five PCA variables. PCA1 and PCA2 were significantly associated with overall plaque instability on histology (both P = .02), whereas PCA3 did not achieve statistical significance (P = .07). DIA features of carotid plaques are associated with histologic plaque instability as assessed by multiple histologic features. Importantly, unstable plaques on histology appear more echolucent and homogeneous on ultrasound imaging. These results are independent of stenosis, suggesting that image analysis may have a role in refining the selection of patients who undergo carotid endarterectomy. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
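
    The dimension-reduction and regression steps, many correlated imaging variables compressed to a handful of PCA scores that feed a logistic model of plaque instability, can be sketched like this on synthetic data (the latent "echolucency" axis and label rule are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)
n = 160
latent = rng.normal(size=(n, 2))   # e.g. echolucency and homogeneity axes
imaging = latent @ rng.normal(size=(2, 39)) + 0.1 * rng.normal(size=(n, 39))
unstable = (latent[:, 0] > 0).astype(int)   # echolucent plaques -> unstable

pcs = PCA(n_components=5).fit_transform(imaging)   # 39 variables -> 5 PCA scores
clf = LogisticRegression().fit(pcs, unstable)
print(round(clf.score(pcs, unstable), 2))
```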

  8. Predicting pork loin intramuscular fat using computer vision system.

    PubMed

    Liu, J-H; Sun, X; Young, J M; Bachmeier, L A; Newman, D J

    2018-09-01

    The objective of this study was to investigate the ability of a computer vision system to predict pork intramuscular fat percentage (IMF%). Center-cut loin samples (n = 85) were trimmed of subcutaneous fat and connective tissue. Images were acquired and pixels were segregated to estimate image IMF% and 18 image color features for each image. Subjective IMF% was determined by a trained grader. Ether extract IMF% was calculated using the ether extract method. Image color features and image IMF% were used as predictors for stepwise regression and support vector machine models. Results showed that subjective IMF% had a correlation of 0.81 with ether extract IMF%, while the image IMF% had a 0.66 correlation with ether extract IMF%. Accuracy rates for regression models were 0.63 for stepwise and 0.75 for support vector machine. Although subjective IMF% was shown to be the better predictor, results from the computer vision system demonstrate its potential as a tool for predicting pork IMF% in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
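
    A sketch of the support-vector-machine variant of the prediction model, with synthetic stand-ins for the 18 colour features, the image-estimated IMF%, and the ether-extract ground truth (all relationships below are assumed for illustration):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(9)
n = 85
color_feats = rng.normal(size=(n, 18))                 # 18 image colour features
image_imf = 2.0 + color_feats[:, 0] + 0.2 * rng.normal(size=n)   # image-estimated IMF%
ether_imf = 0.9 * image_imf + 0.3 * rng.normal(size=n)           # lab ground truth

X = np.column_stack([color_feats, image_imf])
model = SVR(kernel="rbf", C=10.0).fit(X, ether_imf)
r = np.corrcoef(model.predict(X), ether_imf)[0, 1]
print(round(r, 2))   # in-sample correlation with ether-extract IMF%
```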

  9. Quantification of photoacoustic microscopy images for ovarian cancer detection

    NASA Astrophysics Data System (ADS)

    Wang, Tianheng; Yang, Yi; Alqasemi, Umar; Kumavor, Patrick D.; Wang, Xiaohong; Sanders, Melinda; Brewer, Molly; Zhu, Quing

    2014-03-01

    In this paper, human ovarian tissues with malignant and benign features were imaged ex vivo using an optical-resolution photoacoustic microscopy (OR-PAM) system. Several features were quantitatively extracted from the PAM images to describe photoacoustic signal distributions and fluctuations. 106 PAM images from 18 human ovaries were classified by applying the extracted features to a logistic prediction model. 57 images from 9 ovaries were used as a training set for the logistic model, and 49 images from another 9 ovaries were used to test the prediction model. We assumed that if one image from a malignant ovary was classified as malignant, this was sufficient to classify the ovary as malignant. For the training set, we achieved 100% sensitivity and 83.3% specificity; for the testing set, we achieved 100% sensitivity and 66.7% specificity. These preliminary results demonstrate that PAM could be extremely valuable in assisting and guiding surgeons for in vivo evaluation of ovarian tissue.

  10. Quantitative radiomics studies for tissue characterization: a review of technology and methodological procedures.

    PubMed

    Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter

    2017-02-01

    Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.

  11. Quantitative imaging features of pretreatment CT predict volumetric response to chemotherapy in patients with colorectal liver metastases.

    PubMed

    Creasy, John M; Midya, Abhishek; Chakraborty, Jayasree; Adams, Lauryn B; Gomes, Camilla; Gonen, Mithat; Seastedt, Kenneth P; Sutton, Elizabeth J; Cercek, Andrea; Kemeny, Nancy E; Shia, Jinru; Balachandran, Vinod P; Kingham, T Peter; Allen, Peter J; DeMatteo, Ronald P; Jarnagin, William R; D'Angelica, Michael I; Do, Richard K G; Simpson, Amber L

    2018-06-19

    This study investigates whether quantitative image analysis of pretreatment CT scans can predict volumetric response to chemotherapy for patients with colorectal liver metastases (CRLM). Patients treated with chemotherapy for CRLM (hepatic artery infusion (HAI) combined with systemic or systemic alone) were included in the study. Patients were imaged at baseline and approximately 8 weeks after treatment. Response was measured as the percentage change in tumour volume from baseline. Quantitative imaging features were derived from the index hepatic tumour on pretreatment CT, and features statistically significant on univariate analysis were included in a linear regression model to predict volumetric response. The regression model was constructed from 70% of data, while 30% were reserved for testing. Test data were input into the trained model. Model performance was evaluated with mean absolute prediction error (MAPE) and R2. Clinicopathologic factors were assessed for correlation with response. 157 patients were included, split into training (n = 110) and validation (n = 47) sets. MAPE from the multivariate linear regression model was 16.5% (R2 = 0.774) and 21.5% in the training and validation sets, respectively. Stratified by HAI utilisation, MAPE in the validation set was 19.6% for HAI and 25.1% for systemic chemotherapy alone. Clinical factors associated with differences in median tumour response were treatment strategy, systemic chemotherapy regimen, age and KRAS mutation status (p < 0.05). Quantitative imaging features extracted from pretreatment CT are promising predictors of volumetric response to chemotherapy in patients with CRLM. Pretreatment predictors of response have the potential to better select patients for specific therapies. • Colorectal liver metastases (CRLM) are downsized with chemotherapy but predicting which patients will respond to chemotherapy is currently not possible.
• Heterogeneity and enhancement patterns of CRLM can be measured with quantitative imaging. • Prediction model constructed that predicts volumetric response with 20% error suggesting that quantitative imaging holds promise to better select patients for specific treatments.
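
    The modelling recipe, screened features entering a linear regression trained on roughly 70% of patients and evaluated by mean absolute prediction error on the rest, reduces to a few lines; the features and effect sizes below are synthetic assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
n = 157
feats = rng.normal(size=(n, 4))   # pretreatment CT features (synthetic)
# Response: percentage change in tumour volume from baseline (synthetic)
response = 30 * feats[:, 0] - 20 * feats[:, 1] + 5 * rng.normal(size=n)

train, test = slice(0, 110), slice(110, None)   # ~70/30 split as in the paper
model = LinearRegression().fit(feats[train], response[train])
errors = np.abs(model.predict(feats[test]) - response[test])
mape = errors.mean()   # mean absolute prediction error, in %-volume-change units
print(round(mape, 1))
```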

  12. Model-based vision for space applications

    NASA Technical Reports Server (NTRS)

    Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald

    1992-01-01

    This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels every image frame. This work has applications for docking, assembly, retrieval of floating objects and a host of other space-related tasks.
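
    The update step, correlating a predicted model feature against current image data in a small search window, can be sketched with a normalized correlation search; the window size, patch size, and positions are illustrative:

```python
import numpy as np

def best_match(image, template, predicted, search=3):
    """Correlate the template over a small window around the model-predicted
    position and return the offset with the highest normalized score."""
    th, tw = template.shape
    best, best_score = predicted, -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = predicted[0] + dy, predicted[1] + dx
            patch = image[y:y + th, x:x + tw]
            score = np.sum(patch * template) / (np.linalg.norm(patch) + 1e-9)
            if score > best_score:
                best_score, best = score, (y, x)
    return best

rng = np.random.default_rng(11)
frame = rng.uniform(size=(64, 64))
template = frame[30:38, 40:48].copy()   # 8x8 feature patch located at (30, 40)
found = best_match(frame, template, predicted=(29, 41))   # model slightly off
print(found)
```

    The corrected position then feeds back into the model update, as the abstract describes for the two-dimensional feature tracking loop.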

  13. Deep learning for tissue microarray image-based outcome prediction in patients with colorectal cancer

    NASA Astrophysics Data System (ADS)

    Bychkov, Dmitrii; Turkki, Riku; Haglund, Caj; Linder, Nina; Lundin, Johan

    2016-03-01

Recent advances in computer vision enable increasingly accurate automated pattern classification. In the current study we evaluate whether a convolutional neural network (CNN) can be trained to predict disease outcome in patients with colorectal cancer based on images of tumor tissue microarray samples. We compare the prognostic accuracy of CNN features extracted from the whole, unsegmented tissue microarray spot image with that of CNN features extracted from the epithelial and non-epithelial compartments, respectively. The prognostic accuracy of visually assessed histologic grade is used as a reference. The image data set consists of digitized hematoxylin-eosin (H&E) stained tissue microarray samples obtained from 180 patients with colorectal cancer. The patient samples represent a variety of histological grades and have data available on a series of clinicopathological variables, including long-term outcome, as well as ground-truth annotations performed by experts. The CNN features extracted from images of the epithelial tissue compartment significantly predicted outcome (hazard ratio (HR) 2.08; CI95% 1.04-4.16; area under the curve (AUC) 0.66) in a test set of 60 patients, as compared to the CNN features extracted from unsegmented images (HR 1.67; CI95% 0.84-3.31, AUC 0.57) and visually assessed histologic grade (HR 1.96; CI95% 0.99-3.88, AUC 0.61). In conclusion, a deep-learning classifier can be trained to predict the outcome of colorectal cancer based on images of H&E-stained tissue microarray samples, and the CNN features extracted from the epithelial compartment alone yielded prognostic discrimination comparable to that of visually determined histologic grade.

  14. MRI texture features as biomarkers to predict MGMT methylation status in glioblastomas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korfiatis, Panagiotis; Kline, Timothy L.; Erickson, Bradley J., E-mail: bje@mayo.edu

Purpose: Imaging biomarker research focuses on discovering relationships between radiological features and histological findings. In glioblastoma patients, methylation of the O6-methylguanine methyltransferase (MGMT) gene promoter is positively correlated with an increased effectiveness of the current standard of care. In this paper, the authors investigate texture features as potential imaging biomarkers for capturing the MGMT methylation status of glioblastoma multiforme (GBM) tumors when combined with supervised classification schemes. Methods: A retrospective study of 155 GBM patients with known MGMT methylation status was conducted. Co-occurrence and run-length texture features were calculated, and both support vector machines (SVMs) and random forest classifiers were used to predict MGMT methylation status. Results: The best classification system (an SVM-based classifier) had a maximum area under the receiver-operating characteristic (ROC) curve of 0.85 (95% CI: 0.78–0.91) using four texture features (correlation, energy, entropy, and local intensity) originating from the T2-weighted images, yielding, at the optimal threshold of the ROC curve, a sensitivity of 0.803 and a specificity of 0.813. Conclusions: Results show that supervised machine learning of MRI texture features can predict MGMT methylation status in preoperative GBM tumors, thus providing a new noninvasive imaging biomarker.
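The co-occurrence texture features named in this record (energy, entropy, correlation) can be sketched with a small hand-rolled gray-level co-occurrence matrix. Real radiomics pipelines use dedicated libraries (e.g. scikit-image or pyradiomics); this is only an illustration of the definitions.

```python
# Hedged sketch of GLCM texture features: a hand-rolled symmetric
# co-occurrence matrix plus energy, entropy, and correlation.
import numpy as np

def glcm(img, levels, dr=0, dc=1):
    """Symmetric, normalized gray-level co-occurrence matrix for one offset."""
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                P[img[r, c], img[r2, c2]] += 1
                P[img[r2, c2], img[r, c]] += 1   # symmetric counting
    return P / P.sum()

def glcm_features(P):
    eps = 1e-12
    energy = float(np.sum(P ** 2))
    entropy = float(-np.sum(P * np.log2(P + eps)))
    i = np.arange(P.shape[0])
    mu = np.sum(i * P.sum(axis=1))               # marginal mean (symmetric P)
    sigma = np.sqrt(np.sum((i - mu) ** 2 * P.sum(axis=1)))
    correlation = float(np.sum(P * np.outer(i - mu, i - mu)) / (sigma ** 2 + eps))
    return {"energy": energy, "entropy": entropy, "correlation": correlation}

quantized = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [2, 2, 3, 3],
                      [2, 2, 3, 3]])
print(glcm_features(glcm(quantized, levels=4)))
```

A perfectly uniform image has a single co-occurrence cell, so its energy is 1 and its entropy is 0; textured images spread probability mass across cells, lowering energy and raising entropy.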

  15. Improving lung cancer prognosis assessment by incorporating synthetic minority oversampling technique and score fusion method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Shiju; Qian, Wei; Guan, Yubao

    2016-06-15

Purpose: This study aims to investigate the potential to improve lung cancer recurrence risk prediction performance for stage I NSCLC patients by integrating oversampling, feature selection, and score fusion techniques, and to develop an optimal prediction model. Methods: A dataset involving 94 early stage lung cancer patients was retrospectively assembled, which includes CT images, nine clinical and biological (CB) markers, and the outcome of 3-yr disease-free survival (DFS) after surgery. Among the 94 patients, 74 remained disease free and 20 had cancer recurrence. Applying a computer-aided detection scheme, tumors were segmented from the CT images and 35 quantitative image (QI) features were initially computed. Two normalized Gaussian radial basis function network (RBFN) based classifiers were built based on QI features and CB markers separately. To improve prediction performance, the authors applied a synthetic minority oversampling technique (SMOTE) and a BestFirst based feature selection method to optimize the classifiers, and also tested fusion methods to combine QI and CB based prediction results. Results: Using a leave-one-case-out cross-validation method, the computed areas under a receiver operating characteristic curve (AUCs) were 0.716 ± 0.071 and 0.642 ± 0.061 when using the QI and CB based classifiers, respectively. By fusion of the scores generated by the two classifiers, the AUC significantly increased to 0.859 ± 0.052 (p < 0.05) with an overall prediction accuracy of 89.4%. Conclusions: This study demonstrated the feasibility of improving prediction performance by integrating SMOTE, feature selection, and score fusion techniques. Combining QI features and CB markers and performing SMOTE prior to feature selection in classifier training enabled the RBFN based classifier to yield improved prediction accuracy.
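Two of the ingredients named in this record can be sketched compactly: SMOTE-style minority oversampling and score fusion of two classifiers. The hand-rolled oversampler below interpolates between random pairs of minority samples; the real SMOTE (Chawla et al.) interpolates toward k-nearest neighbors, and the study's fusion methods may differ from the simple weighted average shown.

```python
# Sketch of SMOTE-style oversampling and simple score fusion.
# The pairwise interpolation here is a simplification of true SMOTE.
import numpy as np

def simple_smote(X_min, n_new, rng):
    """Generate n_new synthetic minority samples by pairwise interpolation."""
    idx_a = rng.integers(0, len(X_min), size=n_new)
    idx_b = rng.integers(0, len(X_min), size=n_new)
    t = rng.random(size=(n_new, 1))
    return X_min[idx_a] + t * (X_min[idx_b] - X_min[idx_a])

def fuse_scores(score_qi, score_cb, w=0.5):
    """Weighted average of image-based (QI) and clinical-marker (CB) scores."""
    return w * score_qi + (1 - w) * score_cb

rng = np.random.default_rng(1)
X_minority = rng.normal(loc=2.0, size=(20, 5))            # stand-in: 20 recurrence cases
synthetic = simple_smote(X_minority, n_new=54, rng=rng)    # balance 74 vs 20
print(synthetic.shape)                                     # (54, 5)
print(fuse_scores(np.array([0.8, 0.2]), np.array([0.6, 0.4])))   # [0.7 0.3]
```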

  16. Predicting Response to Neoadjuvant Chemotherapy with PET Imaging Using Convolutional Neural Networks

    PubMed Central

    Ypsilantis, Petros-Pavlos; Siddique, Musib; Sohn, Hyon-Mok; Davies, Andrew; Cook, Gary; Goh, Vicky; Montana, Giovanni

    2015-01-01

    Imaging of cancer with 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) has become a standard component of diagnosis and staging in oncology, and is becoming more important as a quantitative monitor of individual response to therapy. In this article we investigate the challenging problem of predicting a patient’s response to neoadjuvant chemotherapy from a single 18F-FDG PET scan taken prior to treatment. We take a “radiomics” approach whereby a large number of quantitative features are automatically extracted from pretherapy PET images in order to build a comprehensive quantification of the tumor phenotype. While the dominant methodology relies on hand-crafted texture features, we explore the potential of automatically learning low- to high-level features directly from PET scans. We report on a study that compares the performance of two competing radiomics strategies: an approach based on state-of-the-art statistical classifiers using over 100 quantitative imaging descriptors, including texture features as well as standardized uptake values, and a convolutional neural network, 3S-CNN, trained directly from PET scans by taking sets of adjacent intra-tumor slices. Our experimental results, based on a sample of 107 patients with esophageal cancer, provide initial evidence that convolutional neural networks have the potential to extract PET imaging representations that are highly predictive of response to therapy. On this dataset, 3S-CNN achieves an average 80.7% sensitivity and 81.6% specificity in predicting non-responders, and outperforms other competing predictive models. PMID:26355298
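The 3S-CNN described above is trained on sets of adjacent intra-tumor slices. The data preparation step implied by that design can be sketched with a synthetic volume: each training example stacks three neighboring slices as channels. The volume shape and channel convention are assumptions, not taken from the paper.

```python
# Sketch of preparing adjacent-slice inputs for a slice-based CNN:
# turn a (slices, H, W) volume into (slices-2, 3, H, W) channel stacks.
import numpy as np

def adjacent_slice_triplets(volume):
    """Stack each run of 3 neighboring slices into one multi-channel example."""
    s = volume.shape[0]
    return np.stack([volume[i:i + 3] for i in range(s - 2)])

pet_volume = np.random.rand(10, 32, 32)   # placeholder for intra-tumor PET slices
examples = adjacent_slice_triplets(pet_volume)
print(examples.shape)                     # (8, 3, 32, 32)
```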

  17. Using multiscale texture and density features for near-term breast cancer risk analysis

    PubMed Central

    Sun, Wenqing; Tseng, Tzu-Liang (Bill); Qian, Wei; Zhang, Jianying; Saltzstein, Edward C.; Zheng, Bin; Lure, Fleming; Yu, Hui; Zhou, Shi

    2015-01-01

Purpose: To help improve the efficacy of screening mammography by eventually establishing a new optimal personalized screening paradigm, the authors investigated the potential of using quantitative multiscale texture and density feature analysis of digital mammograms to predict near-term breast cancer risk. Methods: The authors’ dataset includes digital mammograms acquired from 340 women. Among them, 141 were positive and 199 were negative/benign cases. The negative digital mammograms acquired from the “prior” screening examinations were used in the study. Based on the intensity value distributions, five subregions at different scales were extracted from each mammogram. Five groups of features, including density and texture features, were developed and calculated on every one of the subregions. Sequential forward floating selection was used to search for the effective combinations. Using the selected features, a support vector machine (SVM) was optimized using a tenfold validation method to predict the risk of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) was used as the performance assessment index. Results: From a total number of 765 features computed from multiscale subregions, an optimal feature set of 12 features was selected. Applying this feature set, an SVM classifier yielded performance of AUC = 0.729 ± 0.021. The positive predictive value was 0.657 (92 of 140) and the negative predictive value was 0.755 (151 of 200). Conclusions: The study results demonstrated a moderately high positive association between risk prediction scores generated by the quantitative multiscale mammographic image feature analysis and the actual risk of a woman having an image-detectable breast cancer in subsequent examinations. PMID:26127038
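The record above uses sequential forward floating selection to pick 12 of 765 features. A simplified (non-floating) greedy forward selection, scored here by training accuracy of a linear SVM on synthetic data, illustrates the idea; the floating variant additionally tries removing previously chosen features after each addition.

```python
# Sketch of greedy forward feature selection, a simplified cousin of
# sequential forward floating selection. Data and scoring are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.3, size=120) > 0).astype(int)

def forward_select(X, y, k):
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        def score(f):
            cols = chosen + [f]
            clf = SVC(kernel="linear").fit(X[:, cols], y)
            return clf.score(X[:, cols], y)       # training accuracy
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

selected = forward_select(X, y, k=2)
print(selected)   # likely picks the informative features 0 and 3
```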

  18. Referenceless perceptual fog density prediction model

    NASA Astrophysics Data System (ADS)

    Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.

    2014-02-01

We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without supplementary geographical camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the visibility in a foggy scene as measured by judgments taken in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.
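Space-domain NSS models of the kind referenced above are typically built on locally mean-subtracted, contrast-normalized (MSCN) coefficients, whose statistics shift in the presence of fog (reduced contrast, shifted intensity). A minimal sketch, using a box window instead of the customary Gaussian weighting:

```python
# Minimal MSCN (mean-subtracted, contrast-normalized) coefficient sketch,
# the core space-domain NSS ingredient. A box window stands in for the
# usual Gaussian weighting.
import numpy as np

def mscn(image, win=7, eps=1.0):
    """MSCN coefficients: (pixel - local mean) / (local std + eps)."""
    pad = win // 2
    padded = np.pad(image, pad, mode="reflect")
    # local mean/std over a sliding box window
    windows = np.lib.stride_tricks.sliding_window_view(padded, (win, win))
    mu = windows.mean(axis=(-2, -1))
    sigma = windows.std(axis=(-2, -1))
    return (image - mu) / (sigma + eps)

img = np.random.rand(64, 64) * 255
coeffs = mscn(img)
print(round(float(coeffs.mean()), 3))   # near 0; fog skews the distribution
```

Fog-aware features then summarize the shape of this coefficient distribution (e.g. variance and tail weight), which flattens and narrows as contrast is lost.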

  19. Respiratory trace feature analysis for the prediction of respiratory-gated PET quantification.

    PubMed

    Wang, Shouyi; Bowen, Stephen R; Chaovalitwongse, W Art; Sandison, George A; Grabowski, Thomas J; Kinahan, Paul E

    2014-02-21

    The benefits of respiratory gating in quantitative PET/CT vary tremendously between individual patients. Respiratory pattern is among many patient-specific characteristics that are thought to play an important role in gating-induced imaging improvements. However, the quantitative relationship between patient-specific characteristics of respiratory pattern and improvements in quantitative accuracy from respiratory-gated PET/CT has not been well established. If such a relationship could be estimated, then patient-specific respiratory patterns could be used to prospectively select appropriate motion compensation during image acquisition on a per-patient basis. This study was undertaken to develop a novel statistical model that predicts quantitative changes in PET/CT imaging due to respiratory gating. Free-breathing static FDG-PET images without gating and respiratory-gated FDG-PET images were collected from 22 lung and liver cancer patients on a PET/CT scanner. PET imaging quality was quantified with peak standardized uptake value (SUV(peak)) over lesions of interest. Relative differences in SUV(peak) between static and gated PET images were calculated to indicate quantitative imaging changes due to gating. A comprehensive multidimensional extraction of the morphological and statistical characteristics of respiratory patterns was conducted, resulting in 16 features that characterize representative patterns of a single respiratory trace. The six most informative features were subsequently extracted using a stepwise feature selection approach. The multiple-regression model was trained and tested based on a leave-one-subject-out cross-validation. The predicted quantitative improvements in PET imaging achieved an accuracy higher than 90% using a criterion with a dynamic error-tolerance range for SUV(peak) values. 
The results of this study suggest that our prediction framework could be applied to determine which patients would likely benefit from respiratory motion compensation when clinicians quantitatively assess PET/CT for therapy target definition and response assessment.
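The study above characterizes each respiratory trace with 16 morphological and statistical features before regression. A much smaller sketch of such feature extraction, on a synthetic noisy breathing signal, with an assumed 25 Hz sampling rate:

```python
# Sketch of extracting a few morphological features from a 1-D respiratory
# trace, in the spirit of (though far smaller than) the study's feature set.
import numpy as np

def trace_features(signal, fs):
    """Dominant period, amplitude, and variability of a breathing trace."""
    sig = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    f0 = freqs[1:][np.argmax(spectrum[1:])]     # dominant breathing frequency
    return {
        "period_s": 1.0 / f0,
        "amplitude": (signal.max() - signal.min()) / 2,
        "rms_variability": float(np.std(sig)),
    }

rng = np.random.default_rng(6)
fs = 25.0                                    # Hz, assumed sampling rate
t = np.arange(0, 60, 1 / fs)                 # one minute of breathing
trace = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.standard_normal(len(t))
feats = trace_features(trace, fs)
print(round(feats["period_s"], 2))           # ~4 s breathing cycle
```

Features like these would then feed the study's stepwise selection and multiple-regression stage.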

  1. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    NASA Astrophysics Data System (ADS)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on the regional covariance matrix. For S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., the regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models, including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the accuracy of predicting the visual discomfort experienced when viewing S3D images.
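The region covariance descriptor used above for nonlinear feature integration can be sketched directly: per-pixel feature vectors (here intensity, gradients, and a stand-in depth map) are summarized by their covariance over a region, which captures inter-correlations between feature dimensions. The choice of feature maps is an assumption for illustration.

```python
# Sketch of a region covariance descriptor: the covariance matrix of d
# per-pixel features over a region, fusing 2D and (stand-in) 3D features.
import numpy as np

def region_covariance(feature_maps):
    """Covariance of d per-pixel features (list of equally shaped 2-D arrays)."""
    F = np.stack([f.ravel() for f in feature_maps])   # (d, n_pixels)
    return np.cov(F)

h, w = 16, 16
intensity = np.random.rand(h, w)
gy, gx = np.gradient(intensity)
depth = np.random.rand(h, w)                          # hypothetical 3D depth feature
C = region_covariance([intensity, gx, gy, depth])
print(C.shape)                                        # (4, 4)
```

The resulting symmetric positive semi-definite matrix is compact (d × d regardless of region size), which is part of the descriptor's appeal.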

  2. Histological Image Feature Mining Reveals Emergent Diagnostic Properties for Renal Cancer

    PubMed Central

    Kothari, Sonal; Phan, John H.; Young, Andrew N.; Wang, May D.

    2016-01-01

Computer-aided histological image classification systems are important for making objective and timely cancer diagnostic decisions. These systems use combinations of image features that quantify a variety of image properties. Because researchers tend to validate their diagnostic systems on specific cancer endpoints, it is difficult to predict which image features will perform well given a new cancer endpoint. In this paper, we define a comprehensive set of common image features (consisting of 12 distinct feature subsets) that quantify a variety of image properties. We use a data-mining approach to determine which feature subsets and image properties emerge as part of an “optimal” diagnostic model when applied to specific cancer endpoints. Our goal is to assess the performance of such comprehensive image feature sets for application to a wide variety of diagnostic problems. We perform this study on 12 endpoints including 6 renal tumor subtype endpoints and 6 renal cancer grade endpoints. Keywords: histology, image mining, computer-aided diagnosis. PMID:28163980

  3. Predictive maps for Juno perijoves and identification of significant features

    NASA Astrophysics Data System (ADS)

    Rogers, J. H.; Adamoli, G.; Jacquesson, M.; Vedovato, M.; Mettig, H.-J.; Eichstädt, G.; Caplinger, M.; Momary, T. W.; Orton, G. S.; Tabataba-Vakili, F.; Hansen, C. J.

    2017-09-01

At each Juno perijove, JunoCam takes hi-res images of selected latitudes along the sub-spacecraft track, as determined by public voting. To inform this target selection process, we use the continuous coverage of Jupiter's visible clouds by amateur imaging, and the tracking of features from those images by the JUPOS project, to identify the features which are expected to be visible at the upcoming perijove. We produce a predictive map for each perijove, and subsequently annotate the JunoCam images to locate the known jets and circulations. Up to perijove 5, this collaboration has contributed to hi-res imaging of several long-lived circulations in the northern and southern hemispheres, of major new convective outbreaks in the North and South Equatorial Belts, and of the North Temperate Belt maturing after a cyclic outbreak.

  4. Computer vision system for egg volume prediction using backpropagation neural network

    NASA Astrophysics Data System (ADS)

    Siswantoro, J.; Hilman, M. Y.; Widiasri, M.

    2017-11-01

Volume is one of the aspects considered in the egg sorting process. A rapid and accurate volume measurement method is needed to develop an egg sorting system. A computer vision system (CVS) provides a promising solution for the volume measurement problem. Artificial neural networks (ANNs) have been used to predict the volume of eggs in several CVSs. However, volume prediction from an ANN can have low accuracy due to inappropriate input features or an inappropriate ANN structure. This paper proposes a CVS for predicting the volume of an egg using an ANN. The CVS acquires an image of the egg from the top view and then processes the image to extract its 1D and 2D size features. The features are used as input to the ANN for predicting the volume of the egg. The experimental results show that the proposed CVS can predict the volume of an egg with good accuracy and low computation time.
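The pipeline this record describes, size features from a top-view image fed to a backpropagation network, can be sketched with synthetic data. The major/minor axis "features" and the spheroid volume formula below are assumptions for illustration, not the paper's actual feature set.

```python
# Sketch of size features -> backpropagation network volume regression.
# Axis lengths are synthetic; the target follows a prolate-spheroid formula.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
major = rng.uniform(5.0, 6.5, size=200)     # cm, hypothetical 1D size feature
minor = rng.uniform(4.0, 5.0, size=200)
volume = np.pi / 6 * major * minor ** 2     # prolate-spheroid approximation

X = np.column_stack([major, minor])
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(X, volume)
pred = net.predict([[6.0, 4.5]])
print(float(pred[0]))   # the spheroid formula gives ~63.6 for these axes
```

In the actual system the inputs would be features measured from the segmented egg silhouette rather than known axis lengths.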

  5. Quantitative radiomic profiling of glioblastoma represents transcriptomic expression.

    PubMed

    Kong, Doo-Sik; Kim, Junhyung; Ryu, Gyuha; You, Hye-Jin; Sung, Joon Kyung; Han, Yong Hee; Shin, Hye-Mi; Lee, In-Hee; Kim, Sung-Tae; Park, Chul-Kee; Choi, Seung Hong; Choi, Jeong Won; Seol, Ho Jun; Lee, Jung-Il; Nam, Do-Hyun

    2018-01-19

Quantitative imaging biomarkers have increasingly emerged in the field of research utilizing available imaging modalities. We aimed to identify good surrogate radiomic features that can represent genetic changes of tumors, thereby establishing noninvasive means for predicting treatment outcome. From May 2012 to June 2014, we retrospectively identified 65 patients with treatment-naïve glioblastoma with available clinical information from the Samsung Medical Center data registry. Preoperative MR imaging data were obtained for all 65 patients with primary glioblastoma. A total of 82 imaging features, including first-order statistics, volume, and size features, were semi-automatically extracted from structural and physiologic images such as apparent diffusion coefficient and perfusion images. Using commercially available software, NordicICE, we performed quantitative imaging analysis and collected the dataset composed of radiophenotypic parameters. Unsupervised clustering methods revealed that the radiophenotypic dataset was composed of three clusters. Each cluster represented a distinct molecular classification of glioblastoma: classical type, proneural and neural types, and mesenchymal type. These clusters also reflected differential clinical outcomes. We found that the extracted imaging signatures do not represent copy number variation or somatic mutation. Quantitative radiomic features provide potential evidence for predicting molecular phenotype and treatment outcome. Radiomic profiles represent transcriptomic phenotypes well.
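The unsupervised step described above, clustering per-patient radiomic feature vectors into three groups, can be sketched with k-means. The data are synthetic stand-ins for the 65-patient, 82-feature dataset, and k-means is one possible choice among the unnamed "unsupervised clustering methods".

```python
# Sketch of clustering radiomic feature vectors into three groups.
# Synthetic stand-in for a 65-patient x 82-feature radiomics matrix.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# three loose synthetic "molecular subtype" clusters in feature space
centers = rng.normal(scale=4, size=(3, 82))
features = np.vstack([c + rng.normal(size=(22, 82)) for c in centers])[:65]

X = StandardScaler().fit_transform(features)    # radiomic features vary in scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.unique(labels))                        # three clusters found
```

In the study, each recovered cluster was then compared against molecular classification and clinical outcome.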

  6. Applying quantitative adiposity feature analysis models to predict benefit of bevacizumab-based chemotherapy in ovarian cancer patients

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Qiu, Yuchen; Thai, Theresa; More, Kathleen; Ding, Kai; Liu, Hong; Zheng, Bin

    2016-03-01

How to rationally identify epithelial ovarian cancer (EOC) patients who will benefit from bevacizumab or other antiangiogenic therapies is a critical issue in EOC treatments. The motivation of this study is to quantitatively measure adiposity features from CT images and investigate the feasibility of predicting the potential benefit for EOC patients with or without bevacizumab-based chemotherapy treatment using multivariate statistical models built on quantitative adiposity image features. A dataset involving CT images from 59 advanced EOC patients was included. Among them, 32 patients received maintenance bevacizumab after primary chemotherapy and the remaining 27 patients did not. We developed a computer-aided detection (CAD) scheme to automatically segment subcutaneous fat areas (SFA) and visceral fat areas (VFA) and then extracted 7 adiposity-related quantitative features. Three multivariate data analysis models (linear regression, logistic regression and Cox proportional hazards regression) were applied respectively to investigate the potential association between the model-generated prediction results and the patients' progression-free survival (PFS) and overall survival (OS). The results show that using all 3 statistical models, a statistically significant association was detected between the model-generated results and both clinical outcomes in the group of patients receiving maintenance bevacizumab (p<0.01), while there was no significant association for either PFS or OS in the group of patients not receiving maintenance bevacizumab. Therefore, this study demonstrated the feasibility of using statistical prediction models based on quantitative adiposity-related CT image features to generate a new clinical marker and predict the clinical outcome of EOC patients receiving maintenance bevacizumab-based chemotherapy.

  7. Fractal and Gray Level Cooccurrence Matrix Computational Analysis of Primary Osteosarcoma Magnetic Resonance Images Predicts the Chemotherapy Response.

    PubMed

    Djuričić, Goran J; Radulovic, Marko; Sopta, Jelena P; Nikitović, Marina; Milošević, Nebojša T

    2017-01-01

The prediction of induction chemotherapy response at the time of diagnosis may improve outcomes in osteosarcoma by allowing for personalized tailoring of therapy. The aim of this study was thus to investigate the predictive potential of the so far unexploited computational analysis of osteosarcoma magnetic resonance (MR) images. Fractal and gray level cooccurrence matrix (GLCM) algorithms were employed in retrospective analysis of MR images of primary osteosarcoma localized in the distal femur prior to the OsteoSa induction chemotherapy. The predicted and actual chemotherapy response outcomes were then compared by means of receiver operating characteristic (ROC) analysis and accuracy calculation. Dbin, Λ, and SCN were the standard fractal and GLCM features that were significantly associated with the chemotherapy outcome, but only in one of the analyzed planes. Our newly developed normalized fractal dimension, called the space-filling ratio (SFR), exerted an independent and much better predictive value, with the prediction significance accomplished in two of the three imaging planes, with an accuracy of 82% and area under the ROC curve of 0.20 (95% confidence interval 0-0.41). In conclusion, SFR as the newly designed fractal coefficient provided superior predictive performance in comparison to standard image analysis features, presumably by compensating for the tumor size variation in MR images.
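The standard fractal feature underlying this kind of analysis is the box-counting fractal dimension of a binary image, sketched below. The study's space-filling ratio (SFR) normalizes a fractal dimension for tumor size, but its exact definition is not reproduced here.

```python
# Box-counting fractal dimension of a binary mask: count occupied boxes at
# several scales and fit the slope of log(count) vs log(1/size).
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary mask by box counting."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        # count boxes of side s containing at least one foreground pixel
        boxed = mask[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(boxed.any(axis=(1, 3))))
    # slope of log(count) vs log(1/size) estimates the dimension
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return coeffs[0]

filled = np.ones((64, 64), dtype=bool)      # a filled square has dimension 2
print(round(box_counting_dimension(filled), 2))   # → 2.0
```

Irregular tumor boundaries yield non-integer dimensions between 1 and 2 in a 2-D slice, which is what makes the measure a texture/shape feature.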

  8. Evaluation of correlation between CT image features and ERCC1 protein expression in assessing lung cancer prognosis

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Emaminejad, Nastaran; Qian, Wei; Sun, Shenshen; Kang, Yan; Guan, Yubao; Lure, Fleming; Zheng, Bin

    2014-03-01

Stage I non-small-cell lung cancers (NSCLC) usually have a favorable prognosis. However, a high percentage of NSCLC patients have cancer relapse after surgery. Accurately predicting cancer prognosis is important to optimally treat and manage the patients to minimize the risk of cancer relapse. Studies have shown that the excision repair cross-complementing 1 (ERCC1) gene is a potentially useful genetic biomarker to predict the prognosis of NSCLC patients. Meanwhile, studies also found that chronic obstructive pulmonary disease (COPD) was highly associated with lung cancer prognosis. In this study, we investigated and evaluated the correlations between COPD image features and ERCC1 gene expression. A database involving 106 NSCLC patients was used. Each patient had a thoracic CT examination and an ERCC1 genetic test. We applied a computer-aided detection scheme to segment and quantify COPD image features. A logistic regression method was applied to analyze the correlation between the computed COPD image features and ERCC1 protein expression, and a multilayer perceptron network (MPN) was developed to test the performance of using COPD-related image features to predict ERCC1 protein expression. A nine-feature logistic regression analysis showed that the average COPD feature values in the low and high ERCC1 protein expression groups were significantly different (p < 0.01). Using a five-fold cross-validation method, the MPN yielded an area under the ROC curve (AUC = 0.669±0.053) in classifying between the low and high ERCC1 expression cases. The study indicates that CT phenotype features are associated with the genetic tests, which may provide supplementary information to help improve accuracy in assessing the prognosis of NSCLC patients.

  9. The predictive value of magnetic resonance imaging of retinoblastoma for the likelihood of high-risk pathologic features.

    PubMed

    Hiasat, Jamila G; Saleh, Alaa; Al-Hussaini, Maysa; Al Nawaiseh, Ibrahim; Mehyar, Mustafa; Qandeel, Monther; Mohammad, Mona; Deebajah, Rasha; Sultan, Iyad; Jaradat, Imad; Mansour, Asem; Yousef, Yacoub A

    2018-06-01

To evaluate the predictive value of magnetic resonance imaging in retinoblastoma for the likelihood of high-risk pathologic features. A retrospective study of 64 eyes enucleated from 60 retinoblastoma patients. Contrast-enhanced magnetic resonance imaging was performed before enucleation. Main outcome measures included demographics, laterality, and the accuracy, sensitivity, and specificity of magnetic resonance imaging in detecting high-risk pathologic features. Optic nerve invasion and choroidal invasion were seen microscopically in 34 (53%) and 28 (44%) eyes, respectively, while they were detected on magnetic resonance imaging in 22 (34%) and 15 (23%) eyes, respectively. The accuracy of magnetic resonance imaging in detecting prelaminar invasion was 77% (sensitivity 89%, specificity 98%), 56% for laminar invasion (sensitivity 27%, specificity 94%), 84% for postlaminar invasion (sensitivity 42%, specificity 98%), and 100% for optic cut edge invasion (sensitivity 100%, specificity 100%). The accuracy of magnetic resonance imaging in detecting focal choroidal invasion was 48% (sensitivity 33%, specificity 97%), and 84% for massive choroidal invasion (sensitivity 53%, specificity 98%), and the accuracy in detecting extrascleral extension was 96% (sensitivity 67%, specificity 98%). Magnetic resonance imaging should not be the only method used to stratify patients at high risk from those who are not, even though it can predict extensive postlaminar optic nerve invasion, massive choroidal invasion, and extrascleral tumor extension with high accuracy.
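The accuracy, sensitivity, and specificity figures in this record all derive from a 2x2 confusion table. A small helper shows how the three metrics relate; the counts below are illustrative, not taken from the study.

```python
# Diagnostic metrics from a 2x2 confusion table: sensitivity is the rate of
# detected true invasions, specificity the rate of correctly cleared eyes.
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-table counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# e.g. an imaging test that finds 30 of 34 true invasions and wrongly flags
# 1 of 30 non-invaded eyes (hypothetical counts)
m = diagnostic_metrics(tp=30, fp=1, tn=29, fn=4)
print(m)   # sensitivity ~0.88, specificity ~0.97, accuracy ~0.92
```

Note how a test can combine modest sensitivity with very high specificity, the pattern reported for several invasion types above, because missed invasions (false negatives) do not affect specificity.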

  10. Natural image classification driven by human brain activity

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Peng, Hanyang; Wang, Jinqiao; Tang, Ming; Xue, Rong; Zuo, Zhentao

    2016-03-01

Natural image classification has been a hot topic in the computer vision and pattern recognition research field. Since the performance of an image classification system can be improved by feature selection, many image feature selection methods have been developed. However, the existing supervised feature selection methods are typically driven by class label information that is identical for different samples from the same class, ignoring within-class image variability and therefore degrading feature selection performance. In this study, we propose a novel feature selection method driven by human brain activity signals collected using fMRI while human subjects were viewing natural images of different categories. The fMRI signals associated with subjects viewing different images encode the human perception of natural images, and therefore may capture image variability within and across categories. We then select image features with the guidance of fMRI signals from brain regions with active response to image viewing. In particular, bag-of-words features based on the GIST descriptor are extracted from natural images for classification, and a sparse-regression-based feature selection method is adapted to select the image features that best predict fMRI signals. Finally, a classification model is built on the selected image features to classify images without fMRI signals. Validation experiments classifying images from 4 categories for two subjects demonstrated that our method achieves much better classification performance than classifiers built on image features selected by traditional feature selection methods.
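Sparse-regression-driven feature selection of the kind described above can be sketched with a Lasso: keep the image features whose weights for predicting a brain response remain nonzero under the sparsity penalty. The data, dimensions, and penalty strength below are illustrative assumptions.

```python
# Sketch of sparse-regression feature selection: fit a Lasso from image
# features to a (synthetic) fMRI response and keep nonzero-weight features.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(5)
n_images, n_features = 200, 50
X = rng.normal(size=(n_images, n_features))   # image features (e.g. bag-of-words)
true_w = np.zeros(n_features)
true_w[:5] = 2.0                              # only 5 features drive the response
fmri = X @ true_w + rng.normal(scale=0.5, size=n_images)

lasso = Lasso(alpha=0.1).fit(X, fmri)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(selected[:5])                           # includes the informative features
```

A downstream classifier would then be trained on `X[:, selected]` only, with no fMRI data needed at test time.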

  11. No-reference image quality assessment based on natural scene statistics and gradient magnitude similarity

    NASA Astrophysics Data System (ADS)

    Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang

    2014-11-01

    The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image in agreement with human opinions, in which feature extraction is an important issue. However, the features used in state-of-the-art "general purpose" NR-IQA algorithms are usually either natural scene statistics (NSS) based or perceptually relevant, but not both; the performance of these models is therefore limited. To further improve NR-IQA performance, we propose a general-purpose NR-IQA algorithm that combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract point-wise statistics of single pixel values, characterized by a generalized Gaussian distribution model, to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. A mapping is then learned to predict quality scores using support vector regression. Experimental results on benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and yields significant performance improvements over state-of-the-art methods.
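    The generalized Gaussian fit of point-wise spatial statistics is commonly done by moment matching: estimate the ratio of the squared mean absolute value to the variance, then invert the ratio function over the shape parameter. A stdlib-only sketch (grid search, synthetic Gaussian data; not this paper's code):

```python
import math, random

# Fit the GGD shape parameter gamma by moment matching.
def ggd_shape(samples):
    mean_abs = sum(abs(x) for x in samples) / len(samples)
    var = sum(x * x for x in samples) / len(samples)
    rho_hat = mean_abs ** 2 / var
    best, best_err = None, float("inf")
    g = 0.2
    while g < 10.0:   # rho(gamma) is monotone, so a grid search suffices
        rho = math.gamma(2 / g) ** 2 / (math.gamma(1 / g) * math.gamma(3 / g))
        err = abs(rho - rho_hat)
        if err < best_err:
            best, best_err = g, err
        g += 0.001
    return best

random.seed(1)
gauss = [random.gauss(0, 1) for _ in range(20000)]
print(round(ggd_shape(gauss), 2))  # ~2.0 for Gaussian data
```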

  12. Multifractal modeling, segmentation, prediction, and statistical validation of posterior fossa tumors

    NASA Astrophysics Data System (ADS)

    Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari

    2008-03-01

    In this paper, we characterize tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumors because of their prevalence in pediatric patients. Because of their varying appearance in MRI, we propose to model tumor texture with a multifractal process, such as multifractional Brownian motion (mBm). In mBm, the time-varying Hölder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multifractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multifractal feature, along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, is fused to segment PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images, respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the tumor-discriminating power of our novel multifractal texture feature, along with intensity and fractal features, in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROC curves and demonstrate how sharply the curves reach a specificity of 1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in the automatic detection of PF tumors in pediatric MRIs.

  13. Identification of important image features for pork and turkey ham classification using colour and wavelet texture features and genetic selection.

    PubMed

    Jackman, Patrick; Sun, Da-Wen; Allen, Paul; Valous, Nektarios A; Mendoza, Fernando; Ward, Paddy

    2010-04-01

    A method to discriminate between various grades of pork and turkey ham was developed using colour and wavelet texture features. Image analysis methods originally developed for predicting the palatability of beef were applied to rapidly identify the ham grade. With high-quality digital images of 50-94 slices per ham, it was possible to identify the greyscale that best expressed the differences between the various ham grades. The 10 best discriminating image features were then found with a genetic algorithm. Using these 10 image features, simple linear discriminant analysis models produced 100% correct classifications for both pork and turkey on both calibration and validation sets. © 2009 Elsevier Ltd. All rights reserved.
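    Genetic selection of a feature subset can be sketched as follows. The fitness function here is a synthetic stand-in (the paper scored subsets by discriminant-analysis accuracy), and the "good" feature indices are invented for illustration:

```python
import random

# Toy genetic algorithm: evolve bit-masks over the feature set.
GOOD = {2, 5, 7}   # pretend these features discriminate ham grades

def fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & GOOD) - 0.1 * len(chosen - GOOD)

def evolve(n_feat=10, pop=30, gens=40, seed=3):
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_feat)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist truncation
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feat)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # occasional mutation
                j = rng.randrange(n_feat)
                child[j] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print({i for i, b in enumerate(best) if b})
```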

  14. In vivo placental MRI shape and textural features predict fetal growth restriction and postnatal outcome.

    PubMed

    Dahdouh, Sonia; Andescavage, Nickie; Yewale, Sayali; Yarish, Alexa; Lanham, Diane; Bulas, Dorothy; du Plessis, Adre J; Limperopoulos, Catherine

    2018-02-01

    To investigate the ability of three-dimensional (3D) MRI placental shape and textural features to predict fetal growth restriction (FGR) and birth weight (BW) for both healthy and FGR fetuses. We recruited two groups of pregnant volunteers between 18 and 39 weeks of gestation: 46 healthy subjects and 34 with FGR. Both groups underwent fetal MR imaging on a 1.5 Tesla GE scanner using an eight-channel receiver coil. We acquired T2-weighted images in either the coronal or the axial plane to obtain MR volumes with a slice thickness of either 4 or 8 mm covering the full placenta. Placental shape features (volume, thickness, elongation) were combined with textural features: first-order textural features (mean, variance, kurtosis, and skewness of placental gray levels), as well as textural features computed on the gray-level co-occurrence and run-length matrices characterizing placental homogeneity, symmetry, and coarseness. The features were used in two machine learning frameworks to predict FGR and BW. The proposed machine-learning based method using shape and textural features identified FGR pregnancies with 86% accuracy, 77% precision and 86% recall. BW estimations were 0.3 ± 13.4% (mean percentage error ± standard error) for healthy fetuses and -2.6 ± 15.9% for FGR. The proposed FGR identification and BW estimation methods using in utero placental shape and textural features computed on 3D MR images demonstrated high accuracy in our healthy and high-risk cohorts. Future studies to assess the evolution of each feature with regard to placental development are currently underway. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:449-458. © 2017 International Society for Magnetic Resonance in Medicine.
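    A gray-level co-occurrence matrix (GLCM) and two of the textural descriptors mentioned above can be sketched as follows; this is a simplified single-offset version, not the study's pipeline:

```python
import numpy as np

# GLCM for horizontally adjacent pixels, normalized to a joint probability.
def glcm(img, levels):
    m = np.zeros((levels, levels))
    left, right = img[:, :-1], img[:, 1:]
    for a, b in zip(left.ravel(), right.ravel()):
        m[a, b] += 1
    return m / m.sum()

# Contrast (coarseness-related) and homogeneity descriptors from the GLCM.
def glcm_features(p):
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
p = glcm(img, levels=4)
c, h = glcm_features(p)
print(round(c, 3), round(h, 3))
```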

  15. Clinical- and imaging-based prediction of stroke risk after transient ischemic attack: the CIP model.

    PubMed

    Ay, Hakan; Arsava, E Murat; Johnston, S Claiborne; Vangel, Mark; Schwamm, Lee H; Furie, Karen L; Koroshetz, Walter J; Sorensen, A Gregory

    2009-01-01

    Predictive instruments based on clinical features for early stroke risk after transient ischemic attack suffer from limited specificity. We sought to combine imaging and clinical features to improve predictions of 7-day stroke risk after transient ischemic attack. We studied 601 consecutive patients with transient ischemic attack who had MRI within 24 hours of symptom onset. A logistic regression model was developed using stroke within 7 days as the response criterion and diffusion-weighted imaging findings and dichotomized ABCD2 score (ABCD2 ≥4) as covariates. Subsequent stroke occurred in 25 patients (5.2%). Dichotomized ABCD2 score and acute infarct on diffusion-weighted imaging were each independent predictors of stroke risk. The 7-day risk was 0.0% with no predictor, 2.0% with ABCD2 score ≥4 alone, 4.9% with acute infarct on diffusion-weighted imaging alone, and 14.9% with both predictors (an automated calculator is available at http://cip.martinos.org). Adding imaging increased the area under the receiver operating characteristic curve from 0.66 (95% CI, 0.57 to 0.76) for the ABCD2 score to 0.81 (95% CI, 0.74 to 0.88; P=0.003). A sensitivity of 80% on the receiver operating characteristic curve corresponded to a specificity of 73% for the CIP model and 47% for the ABCD2 score. Combining acute imaging findings with clinical transient ischemic attack features substantially improves the accuracy of early stroke-risk predictions over clinical features alone. If validated in relevant clinical settings, risk stratification by the CIP model may assist in the early implementation of therapeutic measures and the effective use of hospital resources.
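    The stratification idea reduces to tabulating observed 7-day stroke risk for each combination of the two dichotomous predictors (ABCD2 ≥4 and a DWI lesion). A sketch with illustrative counts, not the study's raw data, chosen only to roughly mirror the reported pattern:

```python
# Observed event rate per stratum of two binary predictors.
def risk_table(records):
    # records: iterable of (abcd2_high, dwi_positive, stroke_within_7d)
    strata = {}
    for a, d, s in records:
        n, events = strata.get((a, d), (0, 0))
        strata[(a, d)] = (n + 1, events + s)
    return {k: events / n for k, (n, events) in strata.items()}

records = (
    [(0, 0, 0)] * 120                      # neither predictor, no strokes
    + [(1, 0, 0)] * 98 + [(1, 0, 1)] * 2   # ABCD2 >= 4 alone
    + [(0, 1, 0)] * 95 + [(0, 1, 1)] * 5   # DWI lesion alone
    + [(1, 1, 0)] * 85 + [(1, 1, 1)] * 15  # both predictors
)
table = risk_table(records)
for key in sorted(table):
    print(key, round(table[key], 3))
```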

  16. Pathological Gleason prediction through gland ring morphometry in immunofluorescent prostate cancer images

    NASA Astrophysics Data System (ADS)

    Scott, Richard; Khan, Faisal M.; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo

    2016-03-01

    The Gleason score is the most common architectural and morphological assessment of prostate cancer severity and prognosis. Numerous quantitative techniques have been developed to approximate and duplicate the Gleason scoring system, most of them in standard H&E brightfield microscopy. Immunofluorescence (IF) image analysis of tissue pathology has recently proven extremely valuable and robust for developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as in the characterization of glandular architecture as discrete gland rings. In this work we leverage a new method of segmenting gland rings in IF images to predict the pathological Gleason grade, both the clinical and the image-specific grade, which may not necessarily be the same. We combine these measures with nucleus-specific characteristics assessed by the MST algorithm. Our individual features correlate well univariately with the Gleason grades, and in a multivariate setting they predict the Gleason grade with 85% accuracy. Additionally, these features correlate strongly with clinical progression outcomes (concordance index [CI] of 0.89), significantly outperforming the clinical Gleason grades (CI of 0.78). This work presents the first assessment of morphological gland-unit features from IF images for predicting the Gleason grade.

  17. Multiparametric MRI characterization and prediction in autism spectrum disorder using graph theory and machine learning.

    PubMed

    Zhou, Yongxia; Yu, Fang; Duong, Timothy

    2014-01-01

    This study employed graph theory and machine learning analysis of multiparametric MRI data to improve characterization and prediction in autism spectrum disorders (ASD). Data from 127 children with ASD (13.5±6.0 years) and 153 age- and gender-matched typically developing children (14.5±5.7 years) were selected from the multi-center Functional Connectome Project. Regional gray matter volume and cortical thickness increased, whereas white matter volume decreased, in ASD compared to controls. Small-world network analysis of quantitative MRI data demonstrated decreased global efficiency based on gray matter cortical thickness but not with functional connectivity MRI (fcMRI) or volumetry. An integrative model of 22 quantitative imaging features was used for classification and prediction of phenotypic features that included the autism diagnostic observation schedule, the revised autism diagnostic interview, and intelligence quotient scores. Among the 22 imaging features, four (including caudate volume, caudate-cortical functional connectivity, and inferior frontal gyrus functional connectivity) were found to be highly informative, markedly improving classification and prediction accuracy compared with single imaging features. This approach could potentially serve as a biomarker for prognosis, diagnosis, and monitoring of disease progression.
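    Global efficiency, the small-world measure reported above, is the mean inverse shortest-path length over all node pairs. A minimal stdlib sketch for a binary (unweighted) graph:

```python
import math

# Global efficiency of a binary graph given its adjacency matrix.
def global_efficiency(adj):
    n = len(adj)
    INF = math.inf
    d = [[0 if i == j else (1 if adj[i][j] else INF) for j in range(n)]
         for i in range(n)]
    for k in range(n):                     # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
    return sum(1.0 / d[i][j] for i, j in pairs) / len(pairs)

ring = [[0, 1, 0, 1],                      # 4-node ring graph
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
print(round(global_efficiency(ring), 3))
```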

  18. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques

    PubMed Central

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M.; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V.; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L.; Bilello, Michel; O'Rourke, Donald M.; Davatzikos, Christos

    2016-01-01

    Background MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods Preoperative multiparametric MRIs from one hundred five patients with GB were first used to extract approximately 60 diverse imaging features. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood–brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used only preoperative images; hence, they can significantly augment the diagnosis and treatment of GB patients. PMID:26188015

  19. Applying a machine learning model using a locally preserving projection based feature regeneration algorithm to predict breast cancer risk

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qian, Wei; Zheng, Bin

    2018-03-01

    Both conventional and deep machine learning have been used to develop decision-support tools in medical imaging informatics. In order to take advantage of both conventional and deep learning approaches, this study investigates the feasibility of applying a locally preserving projection (LPP) based feature regeneration algorithm to build a new machine learning classifier model to predict short-term breast cancer risk. First, a computer-aided image processing scheme was used to segment and quantify breast fibro-glandular tissue volume. Next, 44 initially computed image features related to bilateral mammographic tissue density asymmetry were extracted. Then, an LPP-based feature combination method was applied to regenerate a new operational feature vector using a maximal-variance approach. Last, a k-nearest neighbor (KNN) machine learning classifier using the LPP-generated feature vectors was developed to predict breast cancer risk. A testing dataset involving negative mammograms acquired from 500 women was used. Among them, 250 women were positive and 250 remained negative in the next subsequent mammography screening. Applied to this dataset, the LPP-generated feature vector reduced the number of features from 44 to 4. Using a leave-one-case-out validation method, the area under the ROC curve produced by the KNN classifier significantly increased from 0.62 to 0.68 (p < 0.05), and the odds ratio was 4.60 with a 95% confidence interval of [3.16, 6.70]. The study demonstrated that this new LPP-based feature regeneration approach produces an optimal feature vector and yields improved performance in predicting the risk of women having breast cancer detected in the next subsequent mammography screening.
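    The final KNN step can be sketched as a majority vote among the nearest training cases; the 4-dimensional vectors below are synthetic stand-ins for the LPP-regenerated features, not the study's data:

```python
import numpy as np

# Majority-vote KNN over binary labels (0 = remained negative, 1 = positive).
def knn_predict(X_train, y_train, x, k=5):
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = y_train[nearest]
    return int(votes.sum() * 2 > k)

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 0.5, size=(25, 4))  # cases that remained negative
X1 = rng.normal(2.0, 0.5, size=(25, 4))  # cases that became positive
X = np.vstack([X0, X1])
y = np.array([0] * 25 + [1] * 25)
print(knn_predict(X, y, np.full(4, 2.0)))  # query near the positive cluster
```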

  20. Predicting Future Morphological Changes of Lesions from Radiotracer Uptake in 18F-FDG-PET Images

    PubMed Central

    Bagci, Ulas; Yao, Jianhua; Miller-Jaster, Kirsten; Chen, Xinjian; Mollura, Daniel J.

    2013-01-01

    We introduce a novel computational framework to enable automated identification of texture and shape features of lesions on 18F-FDG-PET images through a graph-based image segmentation method. The proposed framework predicts future morphological changes of lesions with high accuracy. The presented methodology has several benefits over conventional qualitative and semi-quantitative methods, due to its fully quantitative nature and high accuracy in each step of (i) detection, (ii) segmentation, and (iii) feature extraction. To evaluate our proposed computational framework, thirty patients received two 18F-FDG-PET scans (60 scans total) at two different time points. Metastatic papillary renal cell carcinoma, cerebellar hemangioblastoma, non-small cell lung cancer, neurofibroma, lymphomatoid granulomatosis, lung neoplasm, neuroendocrine tumor, soft tissue thoracic mass, non-necrotizing granulomatous inflammation, renal cell carcinoma with papillary and cystic features, diffuse large B-cell lymphoma, metastatic alveolar soft part sarcoma, and small cell lung cancer were included in this analysis. The radiotracer accumulation in patients' scans was automatically detected and segmented by the proposed segmentation algorithm. Delineated regions were used to extract shape and textural features with the proposed adaptive feature extraction framework, as well as standardized uptake values (SUV) of uptake regions, to conduct a broad quantitative analysis. Evaluation of segmentation results indicates that our proposed segmentation algorithm has a mean Dice similarity coefficient of 85.75±1.75%. We found that 28 of 68 extracted imaging features correlated well with SUVmax (p<0.05), and some of the textural features (such as entropy and maximum probability) were superior to a single intensity feature such as SUVmax in predicting longitudinal morphological changes of radiotracer uptake regions. We also found that integrating textural features with SUV measurements significantly improves the prediction accuracy of morphological changes (Spearman correlation coefficient = 0.8715, p<2e-16). PMID:23431398
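    The Dice similarity coefficient used to evaluate the segmentation measures the overlap between a delineated region and a reference region (twice the intersection over the sum of sizes). A minimal sketch with tiny made-up pixel sets:

```python
# Dice similarity coefficient between two regions given as pixel sets.
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

seg = {(0, 1), (0, 2), (1, 1), (1, 2)}   # algorithm's region
ref = {(0, 1), (1, 1), (1, 2), (2, 2)}   # reference region
print(dice(seg, ref))  # → 0.75
```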

  1. Quantitative diffusion weighted imaging parameters in tumor and peritumoral stroma for prediction of molecular subtypes in breast cancer

    NASA Astrophysics Data System (ADS)

    He, Ting; Fan, Ming; Zhang, Peng; Li, Hui; Zhang, Juan; Shao, Guoliang; Li, Lihua

    2018-03-01

    Breast cancer can be classified into four molecular subtypes, Luminal A, Luminal B, HER2, and Basal-like, which show significant differences in treatment and survival outcomes. In this study we aim to predict immunohistochemistry (IHC)-determined molecular subtypes of breast cancer using image features derived from the tumor and peritumoral stroma regions based on diffusion weighted imaging (DWI). A dataset of 126 breast cancer patients who underwent preoperative breast MRI with a 3T scanner was collected. The apparent diffusion coefficients (ADCs) were recorded from DWI, and the breast image was segmented into regions comprising the tumor and the surrounding stroma. Statistical characteristics were computed in various breast tumor and peritumoral regions, including the mean, minimum, maximum, variance, interquartile range, range, skewness, and kurtosis of ADC values. Additionally, the differences in features between each pair of regions were calculated. A univariate logistic-regression classifier was used to evaluate the performance of individual features for discriminating subtypes. For multi-class classification, a multivariate logistic regression model was trained and validated. The results showed that features derived from the tumor boundary and proximal peritumoral stroma region achieved higher classification performance than those from the other regions. Furthermore, prediction models using statistical features, difference features, and all features combined from these regions generated AUC values of 0.774, 0.796 and 0.811, respectively. The results of this study indicate that ADC features in the tumor and peritumoral stromal regions would be valuable for estimating the molecular subtype of breast cancer.
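    The per-region ADC statistics and the between-region difference features described above can be sketched as follows (the ADC values are illustrative, not patient data):

```python
import numpy as np

# The eight per-region statistics listed in the abstract.
def adc_stats(values):
    v = np.asarray(values, dtype=float)
    q75, q25 = np.percentile(v, [75, 25])
    mu, sd = v.mean(), v.std()
    return {
        "mean": mu, "min": v.min(), "max": v.max(), "var": v.var(),
        "iqr": q75 - q25, "range": v.max() - v.min(),
        "skewness": ((v - mu) ** 3).mean() / sd ** 3,
        "kurtosis": ((v - mu) ** 4).mean() / sd ** 4 - 3.0,  # excess kurtosis
    }

tumor = [0.9, 1.0, 1.1, 1.0, 0.95]
stroma = [1.4, 1.5, 1.6, 1.55, 1.45]
t, s = adc_stats(tumor), adc_stats(stroma)
diff = {k: t[k] - s[k] for k in t}   # "difference of features" between regions
print(round(diff["mean"], 2))
```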

  2. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods

    PubMed Central

    Hancock, Matthew C.; Magnan, Jerry F.

    2016-01-01

    In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists’ annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification. PMID:27990453

  3. Lung nodule malignancy classification using only radiologist-quantified image features as inputs to statistical learning algorithms: probing the Lung Image Database Consortium dataset with two statistical learning methods.

    PubMed

    Hancock, Matthew C; Magnan, Jerry F

    2016-10-01

    In the assessment of nodules in CT scans of the lungs, a number of image-derived features are diagnostically relevant. Currently, many of these features are defined only qualitatively, so they are difficult to quantify from first principles. Nevertheless, these features (through their qualitative definitions and interpretations thereof) are often quantified via a variety of mathematical methods for the purpose of computer-aided diagnosis (CAD). To determine the potential usefulness of quantified diagnostic image features as inputs to a CAD system, we investigate the predictive capability of statistical learning methods for classifying nodule malignancy. We utilize the Lung Image Database Consortium dataset and only employ the radiologist-assigned diagnostic feature values for the lung nodules therein, as well as our derived estimates of the diameter and volume of the nodules from the radiologists' annotations. We calculate theoretical upper bounds on the classification accuracy that are achievable by an ideal classifier that only uses the radiologist-assigned feature values, and we obtain an accuracy of 85.74 (±1.14)%, which is, on average, 4.43% below the theoretical maximum of 90.17%. The corresponding area-under-the-curve (AUC) score is 0.932 (±0.012), which increases to 0.949 (±0.007) when diameter and volume features are included and has an accuracy of 88.08 (±1.11)%. Our results are comparable to those in the literature that use algorithmically derived image-based features, which supports our hypothesis that lung nodules can be classified as malignant or benign using only quantified, diagnostic image features, and indicates the competitiveness of this approach. 
We also analyze how the classification accuracy depends on specific features and feature subsets, and we rank the features according to their predictive power, statistically demonstrating the top four to be spiculation, lobulation, subtlety, and calcification.
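    The "theoretical upper bound" idea can be understood as follows: when several nodules share an identical radiologist-assigned feature vector but carry different labels, no classifier restricted to those features can separate them, so the best achievable strategy predicts the majority label within each group. A sketch (not the authors' code, toy data):

```python
from collections import Counter, defaultdict

# Best achievable accuracy for a classifier that sees only the feature vector.
def upper_bound_accuracy(samples):
    groups = defaultdict(Counter)
    for features, label in samples:
        groups[tuple(features)][label] += 1
    correct = sum(max(c.values()) for c in groups.values())  # majority vote
    return correct / len(samples)

samples = [
    ((3, 1, 2), "malignant"), ((3, 1, 2), "malignant"),
    ((3, 1, 2), "benign"),                # collides with the two above
    ((1, 1, 1), "benign"), ((1, 1, 1), "benign"),
    ((5, 4, 3), "malignant"),
]
print(upper_bound_accuracy(samples))  # 5/6: one colliding nodule is unavoidable
```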

  4. Hyperspectral image classification based on local binary patterns and PCANet

    NASA Astrophysics Data System (ADS)

    Yang, Huizhen; Gao, Feng; Dong, Junyu; Yang, Yang

    2018-04-01

    Hyperspectral image classification is well acknowledged as one of the challenging tasks of hyperspectral data processing. In this paper, we propose a novel hyperspectral image classification framework based on local binary pattern (LBP) features and PCANet. In the proposed method, linear prediction error (LPE) is first employed to select a subset of informative bands, and LBP is utilized to extract texture features. Then, spectral and texture features are stacked into a high-dimensional vector. Next, the extracted features of a specified position are transformed into a 2-D image. The obtained images of all pixels are fed into PCANet for classification. Experimental results on a real hyperspectral dataset demonstrate the effectiveness of the proposed method.
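    A minimal 8-neighbour LBP operator, as a simplified sketch of the texture features above (not the paper's implementation, which may use different radii and encodings):

```python
import numpy as np

# Encode each interior pixel by comparing its 8 neighbours to the center.
def lbp(img):
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=int)
    # clockwise neighbour offsets starting at the top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        out |= (neigh >= img[1:-1, 1:-1]).astype(int) << bit
    return out

img = np.array([[5, 5, 5],
                [1, 3, 9],
                [0, 0, 0]])
print(lbp(img))  # the four upper neighbours set bits 0-3: 1+2+4+8 = 15
```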

  5. Temporal assessment of radiomic features on clinical mammography in a high-risk population

    NASA Astrophysics Data System (ADS)

    Mendel, Kayla R.; Li, Hui; Lan, Li; Chan, Chun-Wai; King, Lauren M.; Tayob, Nabihah; Whitman, Gary; El-Zein, Randa; Bedrosian, Isabelle; Giger, Maryellen L.

    2018-02-01

    Extraction of high-dimensional quantitative data from medical images has become necessary in disease risk assessment, diagnostics and prognostics. Radiomic workflows for mammography typically involve a single medical image for each patient, although medical images may exist from multiple imaging exams, especially in screening protocols. Our study takes advantage of the availability of mammograms acquired over multiple years for the prediction of cancer onset. This study included 841 images from 328 patients who developed subsequent mammographic abnormalities, which were confirmed as either cancer (n=173) or non-cancer (n=155) through diagnostic core needle biopsy. Quantitative radiomic analysis was conducted on antecedent FFDMs acquired a year or more prior to diagnostic biopsy. Analysis was limited to the breast contralateral to that in which the abnormality arose. Novel metrics were used to identify robust radiomic features. The most robust features were evaluated in the task of predicting future malignancies on a subset of 72 subjects (23 cancer cases and 49 non-cancer controls) with mammograms over multiple years. Using linear discriminant analysis, the robust radiomic features were merged into predictive signatures by: (i) using features from only the most recent contralateral mammogram, (ii) the change in feature values between mammograms, and (iii) the ratio of feature values over time, yielding AUCs of 0.57 (SE=0.07), 0.63 (SE=0.06), and 0.66 (SE=0.06), respectively. The AUCs for temporal radiomics (ratio) differed significantly from chance, suggesting that changes in radiomics over time may be critical for risk assessment. Overall, we found that our two-stage process of robustness assessment followed by performance evaluation served well in our investigation of the role of temporal radiomics in risk assessment.
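    The three temporal signature inputs reduce to three ways of combining a feature's prior and current values; a trivial sketch with illustrative numbers:

```python
# Current value, change over time, and ratio over time for one radiomic feature.
def temporal_features(prior, current):
    return {
        "current": current,
        "change": current - prior,
        "ratio": current / prior if prior else float("nan"),
    }

f = temporal_features(prior=0.40, current=0.52)   # illustrative values
print(round(f["change"], 2), round(f["ratio"], 1))
```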

  6. Deep-Learning Convolutional Neural Networks Accurately Classify Genetic Mutations in Gliomas.

    PubMed

    Chang, P; Grinband, J; Weinberg, B D; Bardis, M; Khy, M; Cadena, G; Su, M-Y; Cha, S; Filippi, C G; Bota, D; Baldi, P; Poisson, L M; Jain, R; Chow, D

    2018-05-10

    The World Health Organization has recently placed new emphasis on the integration of genetic information for gliomas. While tissue sampling remains the criterion standard, noninvasive imaging techniques may provide complementary insight into clinically relevant genetic mutations. Our aim was to train a convolutional neural network to independently predict underlying molecular genetic mutation status in gliomas with high accuracy and identify the most predictive imaging features for each mutation. MR imaging data and molecular information were retrospectively obtained from The Cancer Imaging Archives for 259 patients with either low- or high-grade gliomas. A convolutional neural network was trained to classify isocitrate dehydrogenase 1 (IDH1) mutation status, 1p/19q codeletion, and O6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status. Principal component analysis of the final convolutional neural network layer was used to extract the key imaging features critical for successful classification. Classification had high accuracy: IDH1 mutation status, 94%; 1p/19q codeletion, 92%; and MGMT promoter methylation status, 83%. Each genetic category was also associated with distinctive imaging features such as definition of tumor margins, T1 and FLAIR suppression, extent of edema, extent of necrosis, and textural features. Our results indicate that for The Cancer Imaging Archives dataset, machine-learning approaches allow classification of individual genetic mutations of both low- and high-grade gliomas. We show that relevant MR imaging features acquired from an added dimensionality-reduction technique demonstrate that neural networks are capable of learning key imaging components without prior feature selection or human-directed training. © 2018 by American Journal of Neuroradiology.
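    The dimensionality-reduction step, principal component analysis of final-layer activations, can be sketched with a plain SVD; the activations below are synthetic stand-ins, not CNN outputs:

```python
import numpy as np

# PCA via SVD: project centered data onto the top-k principal directions
# and report the fraction of variance each direction explains.
def pca(X, k):
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S ** 2 / np.sum(S ** 2)
    return Xc @ Vt[:k].T, explained[:k]

rng = np.random.default_rng(7)
X = rng.standard_normal((100, 32))   # 100 "patients" x 32 activation features
X[:, 0] *= 10.0                      # one dominant direction of variation
scores, explained = pca(X, k=2)
print(scores.shape, explained[0] > 0.5)
```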

  7. Assessment of global and local region-based bilateral mammographic feature asymmetry to predict short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Li, Yane; Fan, Ming; Cheng, Hu; Zhang, Peng; Zheng, Bin; Li, Lihua

    2018-01-01

    This study aims to develop and test a new imaging marker-based short-term breast cancer risk prediction model. An age-matched dataset of 566 screening mammography cases was used. All ‘prior’ images acquired in the two screening series were negative, while in the ‘current’ screening images, 283 cases were positive for cancer and 283 cases remained negative. For each case, two bilateral cranio-caudal view mammograms acquired from the ‘prior’ negative screenings were selected and processed by a computer-aided image processing scheme, which segmented the entire breast area into nine strip-based local regions, extracted the element regions using difference of Gaussian filters, and computed both global- and local-based bilateral asymmetrical image features. An initial feature pool included 190 features related to the spatial distribution and structural similarity of grayscale values, as well as of the magnitude and phase responses of multidirectional Gabor filters. Next, a short-term breast cancer risk prediction model based on a generalized linear model was built using an embedded stepwise regression analysis method to select features and a leave-one-case-out cross-validation method to predict the likelihood of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) values significantly increased from 0.5863  ±  0.0237 to 0.6870  ±  0.0220 when the model trained by the image features extracted from the global regions and by the features extracted from both the global and the matched local regions (p  =  0.0001). The odds ratio values monotonically increased from 1.00-8.11 with a significantly increasing trend in slope (p  =  0.0028) as the model-generated risk score increased. In addition, the AUC values were 0.6555  ±  0.0437, 0.6958  ±  0.0290, and 0.7054  ±  0.0529 for the three age groups of 37-49, 50-65, and 66-87 years old, respectively. 
AUC values of 0.6529 ± 0.1100, 0.6820 ± 0.0353, 0.6836 ± 0.0302 and 0.8043 ± 0.1067 were yielded for the four mammographic density sub-groups (BIRADS 1-4), respectively. This study demonstrated that bilateral asymmetry features extracted from local regions combined with the global region in bilateral negative mammograms could be used as a new imaging marker to assist in the prediction of short-term breast cancer risk.
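The AUC values reported throughout these records can be computed directly from case-level risk scores. A minimal sketch (not the authors' implementation) using the equivalence between the AUC and the Mann-Whitney U statistic: the AUC is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case, with ties counting half.

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the probability that a positive case outscores a negative
    case (ties count 0.5), i.e. the normalized Mann-Whitney U statistic."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

For illustration, `auc_mann_whitney([0.9, 0.8, 0.4], [0.3, 0.5, 0.2])` counts 8 winning pairs out of 9, giving an AUC of about 0.89.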

  8. Predicting Cortical Dark/Bright Asymmetries from Natural Image Statistics and Early Visual Transforms

    PubMed Central

    Cooper, Emily A.; Norcia, Anthony M.

    2015-01-01

    The nervous system has evolved in an environment with structure and predictability. One of the ubiquitous principles of sensory systems is the creation of circuits that capitalize on this predictability. Previous work has identified predictable non-uniformities in the distributions of basic visual features in natural images that are relevant to the encoding tasks of the visual system. Here, we report that the well-established statistical distributions of visual features -- such as visual contrast, spatial scale, and depth -- differ between bright and dark image components. Following this analysis, we go on to trace how these differences in natural images translate into different patterns of cortical input that arise from the separate bright (ON) and dark (OFF) pathways originating in the retina. We use models of these early visual pathways to transform natural images into statistical patterns of cortical input. The models include the receptive fields and non-linear response properties of the magnocellular (M) and parvocellular (P) pathways, with their ON and OFF pathway divisions. The results indicate that there are regularities in visual cortical input beyond those that have previously been appreciated from the direct analysis of natural images. In particular, several dark/bright asymmetries provide a potential account for recently discovered asymmetries in how the brain processes visual features, such as violations of classic energy-type models. On the basis of our analysis, we expect that the dark/bright dichotomy in natural images plays a key role in the generation of both cortical and perceptual asymmetries. PMID:26020624
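The separate ON and OFF pathways described above are commonly modeled by half-wave rectifying an image signal about a baseline: responses above baseline drive the ON (bright) channel and responses below it drive the OFF (dark) channel. A minimal sketch of that split (a generic rectification model, not the paper's specific M/P pathway model):

```python
def on_off_split(values, baseline=0.0):
    """Half-wave rectify a signal into ON (brighter than baseline) and
    OFF (darker than baseline) channels, as in retinal pathway models."""
    on = [max(v - baseline, 0.0) for v in values]
    off = [max(baseline - v, 0.0) for v in values]
    return on, off
```

Statistics computed separately on the two rectified channels can then differ, which is the kind of dark/bright asymmetry the study traces into cortical input.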

  9. Image Feature Types and Their Predictions of Aesthetic Preference and Naturalness

    PubMed Central

    Ibarra, Frank F.; Kardan, Omid; Hunter, MaryCarol R.; Kotabe, Hiroki P.; Meyer, Francisco A. C.; Berman, Marc G.

    2017-01-01

Previous research has investigated ways to quantify visual information of a scene in terms of a visual processing hierarchy, i.e., making sense of the visual environment by segmentation and integration of elementary sensory input. Guided by this research, studies have developed categories for low-level visual features (e.g., edges, colors), high-level visual features (scene-level entities that convey semantic information such as objects), and how models of those features predict aesthetic preference and naturalness. For example, in Kardan et al. (2015a), 52 participants provided aesthetic preference and naturalness ratings, which are used in the current study, for 307 images of mixed natural and urban content. Kardan et al. (2015a) then developed a model using low-level features to predict aesthetic preference and naturalness and could do so with high accuracy. What has yet to be explored is the ability of higher-level visual features (e.g., horizon line position relative to viewer, geometry of building distribution relative to visual access) to predict aesthetic preference and naturalness of scenes, and whether higher-level features mediate some of the association between the low-level features and aesthetic preference or naturalness. In this study we investigated these relationships and found that low- and high-level features explain 68.4% of the variance in aesthetic preference ratings and 88.7% of the variance in naturalness ratings. Additionally, several high-level features mediated the relationship between the low-level visual features and aesthetic preference. In a multiple mediation analysis, the high-level feature mediators accounted for over 50% of the variance in predicting aesthetic preference. These results show that high-level visual features play a prominent role in predicting aesthetic preference, but do not completely eliminate the predictive power of the low-level visual features. 
These strong predictors provide powerful insights for future research relating to landscape and urban design with the aim of maximizing subjective well-being, which could lead to improved health outcomes on a larger scale. PMID:28503158

  10. Radiomic texture-curvature (RTC) features for precision medicine of patients with rheumatoid arthritis-associated interstitial lung disease

    NASA Astrophysics Data System (ADS)

    Watari, Chinatsu; Matsuhiro, Mikio; Näppi, Janne J.; Nasirudin, Radin A.; Hironaka, Toru; Kawata, Yoshiki; Niki, Noboru; Yoshida, Hiroyuki

    2018-03-01

We investigated the effect of radiomic texture-curvature (RTC) features of lung CT images in the prediction of the overall survival of patients with rheumatoid arthritis-associated interstitial lung disease (RA-ILD). We retrospectively collected 70 RA-ILD patients who underwent thin-section lung CT and serial pulmonary function tests. After the extraction of the lung region, we computed hyper-curvature features that included the principal curvatures, curvedness, bright/dark sheets, cylinders, blobs, and curvature scales for the bronchi and the aerated lungs. We also computed gray-level co-occurrence matrix (GLCM) texture features on the segmented lungs. An elastic-net penalty method was used to select and combine these features with a Cox proportional hazards model for predicting the survival of the patient. Evaluation was performed by use of the concordance index (C-index) as a measure of prediction performance. The C-index values of the texture features, hyper-curvature features, and the combination thereof (RTC features) in predicting patient survival were estimated by use of bootstrapping with 2,000 replications, and they were compared with an established clinical prognostic biomarker known as the gender, age, and physiology (GAP) index by means of a two-sided t-test. Bootstrap evaluation yielded the following C-index values for the clinical and radiomic features: (a) GAP index: 78.3%; (b) GLCM texture features: 79.6%; (c) hyper-curvature features: 80.8%; and (d) RTC features: 86.8%. The RTC features significantly outperformed all of the other predictors (P < 0.001). The Kaplan-Meier survival curves of patients stratified into low- and high-risk groups based on the RTC features showed a statistically significant (P < 0.0001) difference. Thus, the RTC features can provide an effective imaging biomarker for predicting the overall survival of patients with RA-ILD.

  11. A Bayesian framework for early risk prediction in traumatic brain injury

    NASA Astrophysics Data System (ADS)

    Chaganti, Shikha; Plassard, Andrew J.; Wilson, Laura; Smith, Miya A.; Patel, Mayur B.; Landman, Bennett A.

    2016-03-01

Early detection of risk is critical in determining the course of treatment in traumatic brain injury (TBI). Computed tomography (CT) acquired at admission has shown latent prognostic value in prior studies; however, no robust clinical risk predictions have been achieved based on the imaging data in large-scale TBI analysis. The major challenge lies in the lack of consistent and complete medical records for patients, and an inherent bias associated with the limited number of patient samples with high-risk outcomes in available TBI datasets. Herein, we propose a Bayesian framework with mutual information-based forward feature selection to handle this type of data. Using multi-atlas segmentation, 154 image-based features (capturing intensity, volume and texture) were computed over 22 ROIs in 1791 CT scans. These features were combined with 14 clinical parameters and converted into risk likelihood scores using Bayes modeling. We explore the prediction power of the image features versus the clinical measures for various risk outcomes. The imaging data alone were more predictive of outcomes than the clinical data (including Marshall CT classification) for discharge disposition, with an area under the curve of 0.81 vs. 0.67, but less predictive than clinical data for discharge Glasgow Coma Scale (GCS) score, with an area under the curve of 0.65 vs. 0.85. However, in both cases, combining imaging and clinical data increased the area under the curve, to 0.86 for discharge disposition and 0.88 for discharge GCS score. In conclusion, CT data have meaningful prognostic value for TBI patients beyond what is captured in clinical measures and the Marshall CT classification.
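The mutual information-based forward selection mentioned above can be sketched for discrete features: score each candidate by its mutual information with the outcome labels and greedily add the best remaining one. This is a simplified stand-in for the paper's selection procedure (the greedy criterion and discrete features are illustrative assumptions):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two discrete sequences of equal length."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        pj = c / n
        mi += pj * log2(pj / ((px[x] / n) * (py[y] / n)))
    return mi

def forward_select(features, labels, k):
    """Greedy forward selection: repeatedly add the not-yet-chosen
    feature with the highest mutual information with the labels."""
    chosen, remaining = [], dict(features)
    for _ in range(min(k, len(remaining))):
        best = max(remaining,
                   key=lambda f: mutual_information(remaining[f], labels))
        chosen.append(best)
        del remaining[best]
    return chosen
```

A feature identical to a balanced binary label carries 1 bit of information, while a constant feature carries 0 bits, so the former is selected first.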

  12. Learning better deep features for the prediction of occult invasive disease in ductal carcinoma in situ through transfer learning

    NASA Astrophysics Data System (ADS)

    Shi, Bibo; Hou, Rui; Mazurowski, Maciej A.; Grimm, Lars J.; Ren, Yinhao; Marks, Jeffrey R.; King, Lorraine M.; Maley, Carlo C.; Hwang, E. Shelley; Lo, Joseph Y.

    2018-02-01

Purpose: To determine whether domain transfer learning can improve the performance of deep features extracted from digital mammograms using a pre-trained deep convolutional neural network (CNN) in the prediction of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. Method: In this study, we collected digital mammography magnification views for 140 patients with DCIS at biopsy, 35 of which were subsequently upstaged to invasive cancer. We utilized a deep CNN model that was pre-trained on two natural image data sets (ImageNet and DTD) and one mammographic data set (INbreast) as the feature extractor, hypothesizing that these data sets are increasingly more similar to our target task and will lead to better representations of deep features to describe DCIS lesions. Through a statistical pooling strategy, three sets of deep features were extracted using the CNNs at different levels of convolutional layers from the lesion areas. A logistic regression classifier was then trained to predict which tumors contain occult invasive disease. The generalization performance was assessed and compared using repeated random sub-sampling validation and receiver operating characteristic (ROC) curve analysis. Result: The best performance of deep features was from the CNN model pre-trained on INbreast, and the proposed classifier using this set of deep features was able to achieve a median classification performance of ROC-AUC equal to 0.75, which is significantly better (p<=0.05) than the performance of deep features extracted using the ImageNet data set (ROC-AUC = 0.68). Conclusion: Transfer learning is helpful for learning a better representation of deep features, and improves the prediction of occult invasive disease in DCIS.

  13. SU-D-207B-02: Early Grade Classification in Meningioma Patients Combining Radiomics and Semantics Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coroller, T; Bi, W; Abedalthagafi, M

Purpose: The clinical management of meningioma is guided by its grade and biologic behavior. Currently, diagnosis of tumor grade follows surgical resection and histopathologic review. Reliable techniques for pre-operative determination of tumor behavior are needed. We investigated the association between imaging features extracted from preoperative gadolinium-enhanced T1-weighted MRI and meningioma grade. Methods: We retrospectively examined the pre-operative MRI for 139 patients with de novo WHO grade I (63%) and grade II (37%) meningiomas. We investigated the predictive power of ten semantic radiologic features as determined by a neuroradiologist, fifteen radiomic features, and tumor location. Conventional (volume and diameter) imaging features were added for comparison. AUC was computed for continuous and χ² for discrete variables. Classification was done using random forest. Performance was evaluated using cross validation (1000 iterations, 75% training and 25% validation). All p-values were adjusted for multiple testing. Results: Significant association was observed between meningioma grade and tumor location (p<0.001) and two semantic features including intra-tumoral heterogeneity (p<0.001) and overt hemorrhage (p=0.01). Conventional (AUC 0.61–0.67) and eleven radiomic (AUC 0.60–0.70) features were significantly different from random (p<0.05, Noether test). Median AUC values for classification of tumor grade were 0.57, 0.71, 0.72 and 0.77, respectively, for conventional, radiomic, location, and semantic features using random forest. By combining all imaging data (semantic, radiomic, and location), the median AUC was 0.81, which offers superior predictive power to that of conventional imaging descriptors for meningioma as well as radiomic features alone (p<0.05, permutation test). Conclusion: We demonstrate a strong association between radiologic features and meningioma grade. 
Pre-operative prediction of tumor behavior based on imaging features offers promise for guiding personalized medicine and improving patient management.

  14. Semantic image segmentation with fused CNN features

    NASA Astrophysics Data System (ADS)

    Geng, Hui-qiang; Zhang, Hua; Xue, Yan-bing; Zhou, Mian; Xu, Guang-ping; Gao, Zan

    2017-09-01

Semantic image segmentation is the task of predicting a category label for every image pixel. Its key challenge is to design a strong feature representation. In this paper, we fuse hierarchical convolutional neural network (CNN) features and region-based features as the feature representation. The hierarchical features contain more global information, while the region-based features contain more local information. The combination of these two kinds of features significantly enhances the feature representation. The fused features are then used to train a softmax classifier to produce per-pixel label assignment probabilities, and a fully connected conditional random field (CRF) is used as a post-processing method to improve labeling consistency. We conduct experiments on the SIFT Flow dataset. The pixel accuracy and class accuracy are 84.4% and 34.86%, respectively.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aghaei, Faranak; Tan, Maxine; Liu, Hong

Purpose: To identify a new clinical marker based on quantitative kinetic image feature analysis and assess its feasibility to predict tumor response to neoadjuvant chemotherapy. Methods: The authors assembled a dataset involving breast MR images acquired from 68 cancer patients before undergoing neoadjuvant chemotherapy. Among them, 25 patients had complete response (CR) and 43 had partial and nonresponse (NR) to chemotherapy based on the response evaluation criteria in solid tumors. The authors developed a computer-aided detection scheme to segment breast areas and tumors depicted on the breast MR images and computed a total of 39 kinetic image features from both tumor and background parenchymal enhancement regions. The authors then applied and tested two approaches to classify between CR and NR cases. The first one analyzed each individual feature and applied a simple feature fusion method that combines classification results from multiple features. The second approach tested an attribute selected classifier that integrates an artificial neural network (ANN) with a wrapper subset evaluator, which was optimized using a leave-one-case-out validation method. Results: In the pool of 39 features, 10 yielded relatively higher classification performance, with areas under the receiver operating characteristic curve (AUCs) ranging from 0.61 to 0.78 to classify between CR and NR cases. Using the feature fusion method, the maximum AUC was 0.85 ± 0.05. Using the ANN-based classifier, the AUC value significantly increased to 0.96 ± 0.03 (p < 0.01). Conclusions: This study demonstrated that quantitative analysis of kinetic image features computed from breast MR images acquired prechemotherapy has the potential to generate a useful clinical marker in predicting tumor response to chemotherapy.
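Leave-one-case-out validation, used repeatedly in these records, trains on all cases but one and tests on the held-out case, cycling through the whole dataset. A minimal sketch with a simple nearest-centroid classifier standing in for the actual models (the classifier choice is an illustrative assumption):

```python
from math import dist  # Euclidean distance, Python 3.8+

def nearest_centroid_predict(train_X, train_y, x):
    """Predict the class whose training centroid is closest to x."""
    centroids = {}
    for c in set(train_y):
        pts = [xi for xi, yi in zip(train_X, train_y) if yi == c]
        centroids[c] = [sum(col) / len(pts) for col in zip(*pts)]
    return min(centroids, key=lambda c: dist(centroids[c], x))

def leave_one_case_out(X, y):
    """Each case is predicted by a model trained on all the others;
    returns the overall accuracy across the held-out predictions."""
    correct = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        correct += nearest_centroid_predict(train_X, train_y, X[i]) == y[i]
    return correct / len(X)
```

Because every case serves as a test case exactly once, this makes efficient use of the small patient cohorts typical of these studies while keeping training and test data disjoint.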

  16. Shell feature: a new radiomics descriptor for predicting distant failure after radiotherapy in non-small cell lung cancer and cervix cancer

    NASA Astrophysics Data System (ADS)

    Hao, Hongxia; Zhou, Zhiguo; Li, Shulong; Maquilan, Genevieve; Folkert, Michael R.; Iyengar, Puneeth; Westover, Kenneth D.; Albuquerque, Kevin; Liu, Fang; Choy, Hak; Timmerman, Robert; Yang, Lin; Wang, Jing

    2018-05-01

Distant failure is the main cause of human cancer-related mortality. To develop a model for predicting distant failure in non-small cell lung cancer (NSCLC) and cervix cancer (CC) patients, a shell feature, consisting of outer voxels around the tumor boundary, was constructed using pre-treatment positron emission tomography (PET) images from 48 NSCLC patients who received stereotactic body radiation therapy and 52 CC patients who underwent external beam radiation therapy and concurrent chemotherapy followed by high-dose-rate intracavitary brachytherapy. The hypothesis behind this feature is that non-invasive and invasive tumors may have different morphologic patterns in the tumor periphery, in turn reflecting differences in radiological presentation in the PET images. The utility of the shell feature was evaluated with a support vector machine classifier in comparison with intensity, geometry, gray level co-occurrence matrix-based texture, neighborhood gray tone difference matrix-based texture, and a combination of these four features. The results were assessed in terms of accuracy, sensitivity, specificity, and AUC. Collectively, the shell feature showed better predictive performance than all the other features for distant failure prediction in both the NSCLC and CC cohorts.
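A shell of outer voxels around a tumor boundary can be obtained by dilating the binary tumor mask and subtracting the original mask. A minimal 2-D sketch of that idea (the authors' 3-D implementation details are not given, so this is illustrative):

```python
def dilate(mask, r=1):
    """Binary dilation of a 2-D 0/1 mask with a (2r+1) x (2r+1) square."""
    H, W = len(mask), len(mask[0])
    out = [[0] * W for _ in range(H)]
    for i in range(H):
        for j in range(W):
            if any(mask[ii][jj]
                   for ii in range(max(0, i - r), min(H, i + r + 1))
                   for jj in range(max(0, j - r), min(W, j + r + 1))):
                out[i][j] = 1
    return out

def shell_voxels(mask, r=1):
    """Outer shell: pixels within r of the tumor but outside it."""
    grown = dilate(mask, r)
    return [(i, j) for i in range(len(mask)) for j in range(len(mask[0]))
            if grown[i][j] and not mask[i][j]]
```

Intensity statistics restricted to these shell coordinates then characterize the tumor periphery rather than its core, which is the morphologic pattern the hypothesis targets.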

  17. Sparse Bayesian Learning for Identifying Imaging Biomarkers in AD Prediction

    PubMed Central

    Shen, Li; Qi, Yuan; Kim, Sungeun; Nho, Kwangsik; Wan, Jing; Risacher, Shannon L.; Saykin, Andrew J.

    2010-01-01

We apply sparse Bayesian learning methods, automatic relevance determination (ARD) and predictive ARD (PARD), to Alzheimer's disease (AD) classification to make accurate predictions and, at the same time, identify critical imaging markers relevant to AD. ARD is one of the most successful Bayesian feature selection methods. PARD is a powerful Bayesian feature selection method that provides sparse models which are easy to interpret. PARD selects the model with the best estimate of the predictive performance instead of choosing the one with the largest marginal model likelihood. A comparative study with the support vector machine (SVM) shows that ARD/PARD in general outperform SVM in terms of prediction accuracy. An additional comparison with surface-based general linear model (GLM) analysis shows that the regions with the strongest signals are identified by both GLM and ARD/PARD. While the GLM P-map returns significant regions all over the cortex, ARD/PARD provide a small number of relevant and meaningful imaging markers with predictive power, including both cortical and subcortical measures. PMID:20879451

  18. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    PubMed

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.

  19. TU-CD-BRB-08: Radiomic Analysis of FDG-PET Identifies Novel Prognostic Imaging Biomarkers in Locally Advanced Pancreatic Cancer Patients Treated with SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Y; Shirato, H; Song, J

    2015-06-15

Purpose: This study aims to identify novel prognostic imaging biomarkers in locally advanced pancreatic cancer (LAPC) using quantitative, high-throughput image analysis. Methods: 86 patients with LAPC receiving chemotherapy followed by SBRT were retrospectively studied. All patients had a baseline FDG-PET scan prior to SBRT. For each patient, we extracted 435 PET imaging features of five types: statistical, morphological, textural, histogram, and wavelet. These features went through redundancy checks, robustness analysis, as well as a prescreening process based on their concordance indices with respect to the relevant outcomes. We then performed principal component analysis on the remaining features (number ranged from 10 to 16), and fitted a Cox proportional hazards regression model using the first 3 principal components. Kaplan-Meier analysis was used to assess the ability to distinguish high- versus low-risk patients separated by median predicted survival. To avoid overfitting, all evaluations were based on leave-one-out cross validation (LOOCV), in which each holdout patient was assigned to a risk group according to the model obtained from a separate training set. Results: For predicting overall survival (OS), the most dominant imaging features were wavelet coefficients. There was a statistically significant difference in OS between patients with predicted high and low risk based on LOOCV (hazard ratio: 2.26, p<0.001). Similar imaging features were also strongly associated with local progression-free survival (LPFS) (hazard ratio: 1.53, p=0.026) on LOOCV. In comparison, neither SUVmax nor TLG was associated with LPFS (p=0.103, p=0.433) (Table 1). Results for progression-free survival and distant progression-free survival showed similar trends. Conclusion: Radiomic analysis identified novel imaging features that showed improved prognostic value over conventional methods. 
These features characterize the degree of intra-tumor heterogeneity reflected on FDG-PET images, and their biological underpinnings warrant further investigation. If validated in large, prospective cohorts, this method could be used to stratify patients based on individualized risk.
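The Kaplan-Meier analysis used above estimates a survival curve from possibly censored follow-up times: at each distinct event time t, the running survival probability is multiplied by (1 - d_t / n_t), where d_t is the number of deaths at t and n_t the number still at risk. A minimal sketch of the standard estimator (not the study's statistical software):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate. events[i] is 1 for an
    observed death, 0 for censoring. Returns (time, S(t)) pairs at
    each distinct event time."""
    pts = sorted(zip(times, events))
    s, n, curve = 1.0, len(pts), []
    i = 0
    while i < len(pts):
        t = pts[i][0]
        deaths = leaving = 0
        while i < len(pts) and pts[i][0] == t:
            deaths += pts[i][1]   # observed events at time t
            leaving += 1          # everyone at t leaves the risk set
            i += 1
        if deaths:
            s *= 1 - deaths / n
            curve.append((t, s))
        n -= leaving
    return curve
```

Comparing the curves of the predicted high- and low-risk groups (e.g. with a log-rank test) is what yields the reported p-values for group separation.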

  20. Combining the genetic algorithm and successive projection algorithm for the selection of feature wavelengths to evaluate exudative characteristics in frozen-thawed fish muscle.

    PubMed

    Cheng, Jun-Hu; Sun, Da-Wen; Pu, Hongbin

    2016-04-15

The potential use of feature wavelengths for predicting drip loss in grass carp fish, as affected by being frozen at -20°C for 24 h and thawed at 4°C for 1, 2, 4, and 6 days, was investigated. Hyperspectral images of frozen-thawed fish were obtained and their corresponding spectra were extracted. Least-squares support vector machine and multiple linear regression (MLR) models were established using five key wavelengths, selected by combining a genetic algorithm and the successive projections algorithm, and showed satisfactory performance in drip loss prediction. The MLR model, with a determination coefficient of prediction (R(2)P) of 0.9258 and a lower root mean square error of prediction (RMSEP) of 1.12%, was applied to transform each pixel of the image and generate distribution maps of exudation changes. The results confirmed that it is feasible to identify feature wavelengths using variable selection methods and chemometric analysis for developing on-line multispectral imaging. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    NASA Astrophysics Data System (ADS)

    Su, X.

    2017-12-01

A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes during cloud motion, such as inversion and deformation, are largely ignored. It is still a hard task to predict cloud movement timely and correctly. As deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to deal with spatiotemporal features and build an end-to-end model to solve this forecast problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and the model performs well.
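The claim that a convolutional GRU has fewer parameters than a ConvLSTM follows from gate counts: an LSTM cell has four gate convolutions (input, forget, output, candidate) while a GRU has three (update, reset, candidate). A back-of-the-envelope sketch, where each gate is a k x k convolution over the concatenated input and hidden channels (the channel and kernel sizes below are hypothetical, not from the paper):

```python
def conv_rnn_params(in_ch, hid_ch, k, gates):
    """Parameter count of a convolutional RNN cell: each gate is a
    k x k convolution over [input, hidden] channels, plus a bias."""
    return gates * (k * k * (in_ch + hid_ch) * hid_ch + hid_ch)

lstm = conv_rnn_params(in_ch=1, hid_ch=64, k=3, gates=4)  # ConvLSTM
gru = conv_rnn_params(in_ch=1, hid_ch=64, k=3, gates=3)   # ConvGRU
```

Since every term scales with the gate count, the ConvGRU needs exactly 3/4 of the ConvLSTM's parameters for the same channel and kernel configuration.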

  2. Improving performance of breast cancer risk prediction using a new CAD-based region segmentation scheme

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Danala, Gopichandh; Qiu, Yuchen; Zheng, Bin

    2018-02-01

The objective of this study is to develop and test a new computer-aided detection (CAD) scheme with improved region of interest (ROI) segmentation combined with an image feature extraction framework to improve performance in predicting short-term breast cancer risk. A dataset involving 570 sets of "prior" negative mammography screening cases was retrospectively assembled. In the next sequential "current" screening, 285 cases were positive and 285 cases remained negative. A CAD scheme was applied to all 570 "prior" negative images to stratify cases into high- and low-risk groups for having cancer detected in the "current" screening. First, a new ROI segmentation algorithm was used to automatically remove the useless areas of the mammograms. Second, from the matched bilateral craniocaudal view images, a set of 43 image features related to the frequency characteristics of the ROIs was initially computed from the discrete cosine transform and the spatial domain of the images. Third, a support vector machine (SVM) based machine learning classifier was used to classify the selected optimal image features and build a CAD-based risk prediction model. The classifier was trained using a leave-one-case-out cross-validation method. Applying this improved CAD scheme to the testing dataset yielded an area under the ROC curve of AUC = 0.70 ± 0.04, significantly higher than extracting features directly from the dataset without the improved ROI segmentation step (AUC = 0.63 ± 0.04). This study demonstrated that the proposed approach could improve accuracy in predicting short-term breast cancer risk, which may play an important role in helping eventually establish an optimal personalized breast cancer screening paradigm.
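The frequency-domain features mentioned above come from the discrete cosine transform. A minimal 1-D sketch of the type-II DCT (the classic "DCT"), which concentrates smooth image content into low-frequency coefficients; the 2-D version used on mammogram ROIs applies this along rows and then columns (the unnormalized form below is an illustrative choice):

```python
from math import cos, pi

def dct2_1d(x):
    """Unnormalized type-II discrete cosine transform of a sequence:
    X[k] = sum_n x[n] * cos(pi * k * (2n + 1) / (2N))."""
    N = len(x)
    return [sum(x[n] * cos(pi * k * (2 * n + 1) / (2 * N)) for n in range(N))
            for k in range(N)]
```

For a constant signal, all the energy lands in the k = 0 (DC) coefficient and the higher-frequency coefficients vanish, which is why DCT coefficients make compact texture descriptors.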

  3. Accuracy of ultrasonography and magnetic resonance imaging in the diagnosis of placenta accreta.

    PubMed

    Riteau, Anne-Sophie; Tassin, Mikael; Chambon, Guillemette; Le Vaillant, Claudine; de Laveaucoupet, Jocelyne; Quéré, Marie-Pierre; Joubert, Madeleine; Prevot, Sophie; Philippe, Henri-Jean; Benachi, Alexandra

    2014-01-01

    To evaluate the accuracy of ultrasonography and magnetic resonance imaging (MRI) in the diagnosis of placenta accreta and to define the most relevant specific ultrasound and MRI features that may predict placental invasion. This study was approved by the institutional review board of the French College of Obstetricians and Gynecologists. We retrospectively reviewed the medical records of all patients referred for suspected placenta accreta to two university hospitals from 01/2001 to 05/2012. Our study population included 42 pregnant women who had been investigated by both ultrasonography and MRI. Ultrasound images and MRI were blindly reassessed for each case by 2 raters in order to score features that predict abnormal placental invasion. Sensitivity in the diagnosis of placenta accreta was 100% with ultrasound and 76.9% for MRI (P = 0.03). Specificity was 37.5% with ultrasonography and 50% for MRI (P = 0.6). The features of greatest sensitivity on ultrasonography were intraplacental lacunae and loss of the normal retroplacental clear space. Increased vascularization in the uterine serosa-bladder wall interface and vascularization perpendicular to the uterine wall had the best positive predictive value (92%). At MRI, uterine bulging had the best positive predictive value (85%) and its combination with the presence of dark intraplacental bands on T2-weighted images improved the predictive value to 90%. Ultrasound imaging is the mainstay of screening for placenta accreta. MRI appears to be complementary to ultrasonography, especially when there are few ultrasound signs.
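The sensitivity, specificity, and positive predictive values reported above all derive from a 2 x 2 confusion table of test results against the surgical/pathological reference. A minimal sketch of those definitions (the counts in the usage example are hypothetical, chosen only to reproduce percentages of the kind reported):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic accuracy measures from a 2 x 2 table:
    tp/fp/tn/fn = true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # detected among diseased
        "specificity": tn / (tn + fp),  # cleared among healthy
        "ppv": tp / (tp + fp),          # positive predictive value
    }
```

For instance, hypothetical counts of tp=26, fp=10, tn=6, fn=0 give 100% sensitivity and 37.5% specificity, matching the shape of the ultrasound results quoted in the abstract.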

  4. Predictive value of initial FDG-PET features for treatment response and survival in esophageal cancer patients treated with chemo-radiation therapy using a random forest classifier.

    PubMed

    Desbordes, Paul; Ruan, Su; Modzelewski, Romain; Pineau, Pascal; Vauclin, Sébastien; Gouel, Pierrick; Michel, Pierre; Di Fiore, Frédéric; Vera, Pierre; Gardin, Isabelle

    2017-01-01

    In oncology, texture features extracted from positron emission tomography with 18-fluorodeoxyglucose images (FDG-PET) are of increasing interest for predictive and prognostic studies, leading to several tens of features per tumor. To select the best features, the use of a random forest (RF) classifier was investigated. Sixty-five patients with an esophageal cancer treated with a combined chemo-radiation therapy were retrospectively included. All patients underwent a pretreatment whole-body FDG-PET. The patients were followed for 3 years after the end of the treatment. The response assessment was performed 1 month after the end of the therapy. Patients were classified as complete responders and non-complete responders. Sixty-one features were extracted from medical records and PET images. First, Spearman's analysis was performed to eliminate correlated features. Then, the best predictive and prognostic subsets of features were selected using a RF algorithm. These results were compared to those obtained by a Mann-Whitney U test (predictive study) and a univariate Kaplan-Meier analysis (prognostic study). Among the 61 initial features, 28 were not correlated. From these 28 features, the best subset of complementary features found using the RF classifier to predict response was composed of 2 features: metabolic tumor volume (MTV) and homogeneity from the co-occurrence matrix. The corresponding predictive value (AUC = 0.836 ± 0.105, Se = 82 ± 9%, Sp = 91 ± 12%) was higher than the best predictive results found using the Mann-Whitney test: busyness from the gray level difference matrix (P < 0.0001, AUC = 0.810, Se = 66%, Sp = 88%). The best prognostic subset found using RF was composed of 3 features: MTV and 2 clinical features (WHO status and nutritional risk index) (AUC = 0.822 ± 0.059, Se = 79 ± 9%, Sp = 95 ± 6%), while no feature was significantly prognostic according to the Kaplan-Meier analysis. 
The RF classifier can improve predictive and prognostic values compared to the Mann-Whitney U test and the univariate Kaplan-Meier survival analysis when applied to several tens of features in a limited patient database.
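The two-stage selection described above (eliminate correlated features with Spearman's analysis, then rank the survivors with a random forest) can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code; the feature matrix, correlation threshold, and forest settings are all assumptions, and scipy plus scikit-learn are assumed available.

```python
# Illustrative sketch: Spearman correlation filtering followed by
# random-forest feature ranking, on synthetic data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(65, 10))                    # 65 patients, 10 hypothetical features
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=65)   # feature 1 duplicates feature 0
y = (X[:, 0] + rng.normal(scale=0.5, size=65) > 0).astype(int)

# Step 1: drop any feature highly correlated (|rho| > 0.8) with an earlier one.
rho = np.abs(spearmanr(X)[0])
keep = [i for i in range(X.shape[1])
        if not any(rho[i, j] > 0.8 for j in range(i))]

# Step 2: rank the surviving features by random-forest importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
ranked = [keep[i] for i in np.argsort(rf.feature_importances_)[::-1]]
print("retained:", keep)
print("ranked by importance:", ranked)
```

A real radiomic pipeline would apply the same two steps to the study's mix of texture and clinical features, then keep only a small complementary subset.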

  5. Imaging patterns predict patient survival and molecular subtype in glioblastoma via machine learning techniques.

    PubMed

    Macyszyn, Luke; Akbari, Hamed; Pisapia, Jared M; Da, Xiao; Attiah, Mark; Pigrish, Vadim; Bi, Yingtao; Pal, Sharmistha; Davuluri, Ramana V; Roccograndi, Laura; Dahmane, Nadia; Martinez-Lage, Maria; Biros, George; Wolf, Ronald L; Bilello, Michel; O'Rourke, Donald M; Davatzikos, Christos

    2016-03-01

    MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers used solely preoperative images, hence they can significantly augment diagnosis and treatment of GB patients. © The Author(s) 2015. Published by Oxford University Press on behalf of the Society for Neuro-Oncology. All rights reserved. 

  6. Quantitative analysis of adipose tissue on chest CT to predict primary graft dysfunction in lung transplant recipients: a novel optimal biomarker approach

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Wang, Chuang; Wu, Caiyun; Pednekar, Gargi; Restivo, Michaela D.; Lederer, David J.; Christie, Jason D.; Torigian, Drew A.

    2018-02-01

In this study, patients who underwent lung transplantation are categorized into two groups of successful (positive) or failed (negative) transplantation according to primary graft dysfunction (PGD), i.e., acute lung injury within 72 hours of lung transplantation. Obesity or being underweight is associated with an increased risk of PGD. Adipose quantification and characterization via computed tomography (CT) imaging is an evolving topic of interest. However, very little research on PGD prediction using adipose quantity or characteristics derived from medical images has been performed. The aim of this study is to explore image-based features of thoracic adipose tissue on pre-operative chest CT to distinguish between the above two groups of patients. 140 unenhanced chest CT images from three lung transplant centers (Columbia, Penn, and Duke) are included in this study. 124 patients are in the successful group and 16 in the failure group. Chest CT slices at the T7 and T8 vertebral levels are captured to represent the thoracic fat burden by using a standardized anatomic space (SAS) approach. Fat (subcutaneous adipose tissue (SAT)/visceral adipose tissue (VAT)) intensity and texture properties (1142 in total) for each patient are collected, and then an optimal feature set is selected to maximize feature independence and separation between the two groups. Leave-one-out and leave-ten-out cross-validation strategies are adopted to test the prediction ability based on those selected features, all of which came from VAT texture properties. Accuracy of prediction (ACC), sensitivity (SEN), specificity (SPE), and area under the curve (AUC) of 0.87/0.97, 0.87/0.97, 0.88/1.00, and 0.88/0.99, respectively, are achieved by the method. The optimal feature set includes only 5 features (also all from VAT), which might suggest that thoracic VAT plays a more important role than SAT in predicting PGD in lung transplant recipients.

  7. SU-F-R-04: Radiomics for Survival Prediction in Glioblastoma (GBM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, H; Molitoris, J; Bhooshan, N

Purpose: To develop a quantitative radiomics approach for survival prediction of glioblastoma (GBM) patients treated with chemoradiotherapy (CRT). Methods: 28 GBM patients who received CRT at our institution were retrospectively studied. 255 radiomic features were extracted from 3 gadolinium-enhanced T1-weighted MRIs for 2 regions of interest (ROIs) (the surgical cavity and its surrounding enhancement rim). The 3 MRIs were at pre-treatment, 1-month and 3-month post-CRT. The imaging features comprehensively quantified the intensity, spatial variation (texture), geometric property and their spatial-temporal changes for the 2 ROIs. 3 demographic features (age, race, gender) and 12 clinical parameters (KPS, extent of resection, whether concurrent temozolomide was adjusted/stopped, and radiotherapy-related information) were also included. 4 machine learning models (logistic regression (LR), support vector machine (SVM), decision tree (DT), neural network (NN)) were applied to predict overall survival (OS) and progression-free survival (PFS). The number and percentage of cases predicted correctly were collected, and the AUC (area under the receiver operating characteristic (ROC) curve) was determined after leave-one-out cross-validation. Results: From univariate analysis, 27 features (1 demographic, 1 clinical and 25 imaging) were statistically significant (p<0.05) for both OS and PFS. Two sets of features (each containing 24 features) were algorithmically selected from all features to predict OS and PFS. High prediction accuracy of OS was achieved by using NN (96%, 27 of 28 cases correctly predicted, AUC = 0.99), LR (93%, 26 of 28, AUC = 0.95) and SVM (93%, 26 of 28, AUC = 0.90). When predicting PFS, NN obtained the highest prediction accuracy (89%, 25 of 28 cases correctly predicted, AUC = 0.92).
Conclusion: A radiomics approach combined with patients' demographics and clinical parameters can accurately predict survival in GBM patients treated with CRT.

  8. MO-DE-207B-01: JACK FOWLER JUNIOR INVESTIGATOR COMPETITION WINNER: Between Somatic Mutations and PET-Based Radiomic Features in Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Coroller, T; Rios Velazquez, E

Purpose: Although PET-based radiomic features have been proposed to quantify tumor heterogeneity and shown promise in outcome prediction, little is known about their relationship with tumor genetics. This study assessed the association of [18F]fluorodeoxyglucose (FDG)-PET-based radiomic features with non-small cell lung cancer (NSCLC) mutations. Methods: 348 NSCLC patients underwent FDG-PET/CT scans before treatment and were tested for genetic mutations. 13% (44/348) and 28% (96/348) of patients were found to harbor EGFR (EGFR+) and KRAS (KRAS+) mutations, respectively. We evaluated nineteen PET-based radiomic features quantifying phenotypic traits, and compared them with conventional PET features (metabolic tumor volume (MTV) and maximum-SUV). The association between the feature values and mutation status was evaluated using the Wilcoxon rank-sum test. The ability of each measure to predict mutations was assessed by the area under the receiver operating characteristic curve (AUC). Noether's test was used to determine if the AUCs were significantly different from random (AUC=0.50). All p-values were corrected for multiple testing by controlling the false discovery rate (FDR_Wilcoxon and FDR_Noether) at 10%. Results: Eight radiomic features, MTV, and maximum-SUV were significantly associated with the EGFR mutation (FDR_Wilcoxon=0.01–0.10). However, KRAS+ demonstrated no significantly distinctive imaging features compared to KRAS− (FDR_Wilcoxon≥0.92). EGFR+ and EGFR− were significantly discriminated by conventional PET features (AUC=0.61, FDR_Noether=0.04 for MTV and AUC=0.64, FDR_Noether=0.01 for maximum-SUV). Eight radiomic features were significantly predictive for EGFR+ compared to EGFR− (AUC=0.59–0.67, FDR_Noether=0.0032–0.09). Normalized-inverse-difference-moment outperformed all features in predicting EGFR mutation (AUC=0.67, FDR_Noether=0.0032).
Moreover, only the radiomic feature normalized-inverse-difference-moment could significantly distinguish KRAS+ from EGFR+ (AUC=0.65, FDR_Noether=0.05). All measures failed to predict KRAS+ from KRAS− (AUC=0.50–0.54, FDR_Noether≥0.92). Conclusion: PET imaging features were strongly associated with EGFR mutations in NSCLC. Radiomic features have great potential in predicting EGFR mutations. Our study may help develop a non-invasive imaging biomarker for EGFR mutation. R.M. has consulting interests with Amgen.
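The statistical pipeline in this record (a per-feature Wilcoxon rank-sum test, then false-discovery-rate control at 10%) can be sketched on synthetic data. Everything here is hypothetical: the group sizes loosely echo the abstract, the five features are random draws, and scipy is assumed; the Benjamini-Hochberg step-up procedure stands in for whichever FDR method the authors used.

```python
# Hedged sketch: per-feature Wilcoxon rank-sum tests with
# Benjamini-Hochberg FDR control, on synthetic two-group data.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
mut = rng.normal(loc=1.0, size=(44, 5))    # hypothetical mutation-positive group
wt = rng.normal(loc=0.0, size=(96, 5))     # mutation-negative group
wt[:, 4] = rng.normal(loc=1.0, size=96)    # feature 4 carries no group difference

pvals = np.array([ranksums(mut[:, k], wt[:, k]).pvalue for k in range(5)])

def bh_fdr(p):
    # Benjamini-Hochberg adjusted p-values (step-up procedure).
    order = np.argsort(p)
    ranks = np.arange(1, len(p) + 1)
    adj = p[order] * len(p) / ranks
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty_like(adj)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out

fdr = bh_fdr(pvals)
print("significant at FDR 10%:", int((fdr < 0.10).sum()))
```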

  9. Association of high proliferation marker Ki-67 expression with DCEMR imaging features of breast: a large scale evaluation

    NASA Astrophysics Data System (ADS)

    Saha, Ashirbani; Harowicz, Michael R.; Grimm, Lars J.; Kim, Connie E.; Ghate, Sujata V.; Walsh, Ruth; Mazurowski, Maciej A.

    2018-02-01

    One of the methods widely used to measure the proliferative activity of cells in breast cancer patients is the immunohistochemical (IHC) measurement of the percentage of cells stained for nuclear antigen Ki-67. Use of Ki-67 expression as a prognostic marker is still under investigation. However, numerous clinical studies have reported an association between a high Ki-67 and overall survival (OS) and disease free survival (DFS). On the other hand, to offer non-invasive alternative in determining Ki-67 expression, researchers have made recent attempts to study the association of Ki-67 expression with magnetic resonance (MR) imaging features of breast cancer in small cohorts (<30). Here, we present a large scale evaluation of the relationship between imaging features and Ki-67 score as: (a) we used a set of 450 invasive breast cancer patients, (b) we extracted a set of 529 imaging features of shape and enhancement from breast, tumor and fibroglandular tissue of the patients, (c) used a subset of patients as the training set to select features and trained a multivariate logistic regression model to predict high versus low Ki-67 values, and (d) we validated the performance of the trained model in an independent test set using the area-under the receiver operating characteristics (ROC) curve (AUC) of the values predicted. Our model was able to predict high versus low Ki-67 in the test set with an AUC of 0.67 (95% CI: 0.58-0.75, p<1.1e-04). Thus, a moderate strength of association of Ki-67 values and MRextracted imaging features was demonstrated in our experiments.

  10. A Feature-based Approach to Big Data Analysis of Medical Images

    PubMed Central

    Toews, Matthew; Wachinger, Christian; Estepar, Raul San Jose; Wells, William M.

    2015-01-01

This paper proposes an inference method well-suited to large sets of medical images. The method is based upon a framework where distinctive 3D scale-invariant features are indexed efficiently to identify approximate nearest-neighbor (NN) feature matches in O(log N) computational complexity in the number of images N. It thus scales well to large data sets, in contrast to methods based on pair-wise image registration or feature matching requiring O(N) complexity. Our theoretical contribution is a density estimator based on a generative model that generalizes kernel density estimation and K-nearest neighbor (KNN) methods. The estimator can be used for on-the-fly queries, without requiring explicit parametric models or an off-line training phase. The method is validated on a large multi-site data set of 95,000,000 features extracted from 19,000 lung CT scans. Subject-level classification identifies all images of the same subjects across the entire data set despite deformation due to breathing state, including unintentional duplicate scans. State-of-the-art performance is achieved in predicting chronic obstructive pulmonary disease (COPD) severity across the 5-category GOLD clinical rating, with an accuracy of 89% if both exact and one-off predictions are considered correct. PMID:26221685
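The indexing idea, sub-linear nearest-neighbor lookup instead of O(N) pairwise matching, can be illustrated with a k-d tree. This is only an interface sketch with assumed synthetic descriptors: the paper's index handles approximate matching of tens of millions of 3D scale-invariant features, and plain k-d trees degrade toward linear scan in high dimensions, so a production system would use an approximate index.

```python
# Sketch: nearest-neighbor feature lookup via a k-d tree, built once and
# queried many times, in contrast to O(N) pairwise comparison.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
db = rng.normal(size=(10_000, 64))        # database of 64-D feature descriptors
tree = cKDTree(db)                         # index built once

query = db[123] + 0.001 * rng.normal(size=64)   # a slightly perturbed feature
dist, idx = tree.query(query, k=1)              # nearest database descriptor
print("matched index:", int(idx))
```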

  12. Predicting Ki67% expression from DCE-MR images of breast tumors using textural kinetic features in tumor habitats

    NASA Astrophysics Data System (ADS)

    Chaudhury, Baishali; Zhou, Mu; Farhidzadeh, Hamidreza; Goldgof, Dmitry B.; Hall, Lawrence O.; Gatenby, Robert A.; Gillies, Robert J.; Weinfurtner, Robert J.; Drukteinis, Jennifer S.

    2016-03-01

The use of Ki67% expression, a cell proliferation marker, as a predictive and prognostic factor has been widely studied in the literature. Yet its usefulness is limited due to inconsistent cut-off scores for Ki67% expression, subjective differences in its assessment in various studies, and spatial variation in expression, which makes it difficult to reproduce as a reliable independent prognostic factor. Previous studies have shown that there are significant spatial variations in Ki67% expression, which may limit its clinical prognostic utility after core biopsy. These variations are most evident when examining the periphery of the tumor vs. the core. To date, prediction of Ki67% expression from quantitative image analysis of DCE-MRI is very limited. This work presents a novel computer aided diagnosis framework to use textural kinetics to (i) predict the ratio of periphery Ki67% expression to core Ki67% expression, and (ii) predict Ki67% expression from individual tumor habitats. The pilot cohort consists of T1 weighted fat saturated DCE-MR images from 17 patients. Support vector regression with a radial basis function was used for predicting the Ki67% expression and ratios. The initial results show that texture features from individual tumor habitats are more predictive of the Ki67% expression ratio and spatial Ki67% expression than features from the whole tumor. The Ki67% expression ratio could be predicted with a root mean square error (RMSE) of 1.67%. Quantitative image analysis of DCE-MRI using textural kinetic habitats has the potential to be used as a non-invasive method for predicting Ki67 percentage and ratio, thus more accurately reporting high Ki-67 expression for patient prognosis.
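The regression step named above, support vector regression with a radial-basis-function kernel, can be sketched in a few lines. The data here are synthetic stand-ins (17 cases with 6 invented texture features), and the hyperparameters are assumptions, not the study's settings; scikit-learn is assumed.

```python
# Minimal sketch: RBF-kernel support vector regression predicting a
# continuous score, scored by RMSE (synthetic data).
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.uniform(size=(17, 6))            # 17 cases, 6 hypothetical texture features
y = 10 * X[:, 0] + rng.normal(scale=0.5, size=17)   # stand-in Ki-67% target

model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
rmse = float(np.sqrt(mean_squared_error(y, model.predict(X))))
print(f"training RMSE: {rmse:.2f}")
```

With only 17 cases, as in the pilot cohort, such a model would need careful cross-validation; the training-set RMSE above is optimistic by construction.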

  13. Changes in quantitative 3D shape features of the optic nerve head associated with age

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2013-02-01

Optic nerve head (ONH) structure is an important biological feature of the eye used by clinicians to diagnose and monitor progression of diseases such as glaucoma. ONH structure is commonly examined using stereo fundus imaging or optical coherence tomography. Stereo fundus imaging provides stereo views of the ONH that retain 3D information useful for characterizing structure. In order to quantify 3D ONH structure, we applied a stereo correspondence algorithm to a set of stereo fundus images. Using these quantitative 3D ONH structure measurements, eigen structures were derived using principal component analysis from stereo images of 565 subjects from the Ocular Hypertension Treatment Study (OHTS). To evaluate the usefulness of the eigen structures, we explored associations with the demographic variables age, gender, and race. Using regression analysis, the eigen structures were found to have significant (p < 0.05) associations with both age and race after Bonferroni correction. In addition, classifiers were constructed to predict the demographic variables based solely on the eigen structures. These classifiers achieved an area under the receiver operating characteristic curve of 0.62 in predicting a binary age variable, 0.52 in predicting gender, and 0.67 in predicting race. The use of objective, quantitative features or eigen structures can reveal hidden relationships between ONH structure and demographics. The use of these features could similarly allow specific aspects of ONH structure to be isolated and associated with the diagnosis of glaucoma, disease progression and outcomes, and genetic factors.

  14. Analysis of hyperspectral scattering images using a moment method for apple firmness prediction

    USDA-ARS?s Scientific Manuscript database

    This article reports on using a moment method to extract features from the hyperspectral scattering profiles for apple fruit firmness prediction. Hyperspectral scattering images between 500 nm and 1000 nm were acquired online, using a hyperspectral scattering system, for ‘Golden Delicious’, ’Jonagol...

  15. Modeling resident error-making patterns in detection of mammographic masses using computer-extracted image features: preliminary experiments

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora

    2014-03-01

Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression model that accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.

  16. SU-E-J-270: Repeated 18F-FDG PET/CT-Based Feature Analysis for the Prediction of Anal Cancer Recurrence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, J; Chuong, M; Choi, W

Purpose: To identify PET/CT based imaging predictors of anal cancer recurrence and evaluate baseline vs. mid-treatment vs. post-treatment PET/CT scans in tumor recurrence prediction. Methods: FDG-PET/CT scans were obtained at baseline, during chemoradiotherapy (CRT, mid-treatment), and after CRT (post-treatment) in 17 patients with anal cancer. Four patients had tumor recurrence. For each patient, the mid-treatment and post-treatment scans were respectively aligned to the baseline scan by a rigid registration followed by a deformable registration. PET/CT image features were computed within the manually delineated tumor volume of each scan to characterize the intensity histogram, spatial patterns (texture), and shape of the tumors, as well as the changes of these features resulting from CRT. A total of 335 image features were extracted. An Exact Logistic Regression model was employed to analyze these PET/CT image features in order to identify potential predictors for tumor recurrence. Results: Eleven potential predictors of cancer recurrence were identified with p < 0.10, including five shape features, five statistical texture features, and one CT intensity histogram feature. Six features were identified from post-treatment scans, 3 from mid-treatment scans, and 2 from baseline scans. These features indicated that there were differences in shape, intensity, and spatial pattern between tumors with and without recurrence. Recurrent tumors tended to have a more compact shape (higher roundness and lower elongation) and larger intensity differences between baseline and follow-up scans, compared to non-recurrent tumors. Conclusion: PET/CT based anal cancer recurrence predictors were identified. The post-CRT PET/CT is the most important scan for the prediction of cancer recurrence. The baseline and mid-CRT PET/CT also showed value and would be more useful for the prediction of tumor recurrence in the early stage of CRT.
This work was supported in part by the National Cancer Institute Grant R01CA172638.

  17. TH-CD-207A-07: Prediction of High Dimensional State Subject to Respiratory Motion: A Manifold Learning Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, W; Sawant, A; Ruan, D

Purpose: The development of high dimensional imaging systems (e.g. volumetric MRI, CBCT, photogrammetry systems) in image-guided radiotherapy provides important pathways to the ultimate goal of real-time volumetric/surface motion monitoring. This study aims to develop a prediction method for the high dimensional state subject to respiratory motion. Compared to conventional linear dimension reduction based approaches, our method utilizes manifold learning to construct a descriptive feature submanifold, where more efficient and accurate prediction can be performed. Methods: We developed a prediction framework for the high-dimensional state subject to respiratory motion. The proposed method performs dimension reduction in a nonlinear setting to permit more descriptive features compared to its linear counterparts (e.g., classic PCA). Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where low-dimensional prediction is performed. A fixed-point iterative pre-image estimation method is applied subsequently to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based method on 200 level-set surfaces reconstructed from surface point clouds captured by the VisionRT system. The prediction accuracy was evaluated with respect to root-mean-squared error (RMSE) for both 200ms and 600ms lookahead lengths. Results: The proposed method outperformed the PCA-based approach with statistically higher prediction accuracy. In a one-dimensional feature subspace, our method achieved mean prediction accuracy of 0.86mm and 0.89mm for 200ms and 600ms lookahead lengths respectively, compared to 0.95mm and 1.04mm from the PCA-based method. Paired t-tests further demonstrated the statistical significance of the superiority of our method, with p-values of 6.33e-3 and 5.78e-5, respectively.
Conclusion: The proposed approach benefits from the descriptiveness of a nonlinear manifold and the prediction reliability in such a low dimensional manifold. The fixed-point iterative approach turns out to work well in practice for the pre-image recovery. Our approach is particularly suitable for managing respiratory motion in image-guided radiotherapy. This work is supported in part by NIH grant R01 CA169102-02.
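The reduce-predict-recover loop described above can be sketched with kernel PCA on a toy 1-D manifold (a noisy circle standing in for breathing states). Note the substitution: scikit-learn's `inverse_transform` learns the pre-image map by kernel ridge regression, not the fixed-point iteration the abstract uses, and the kernel and regularization settings here are assumptions.

```python
# Conceptual sketch: kernel PCA to a low-dimensional feature manifold,
# then pre-image recovery back to the original state space.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, size=200)    # latent breathing phase
states = np.column_stack([np.cos(t), np.sin(t), 0.05 * rng.normal(size=200)])

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0, alpha=1e-3,
                 fit_inverse_transform=True).fit(states)

z = kpca.transform(states)            # low-dimensional features
# (a temporal predictor would forecast z here; we simply round-trip it)
recon = kpca.inverse_transform(z)     # pre-image estimate in state space
err = float(np.sqrt(np.mean((recon - states) ** 2)))
print(f"round-trip RMSE: {err:.3f}")
```

In the study, the forecast happens in the low-dimensional space `z` before pre-image recovery; the round trip above only checks that the recovery step is faithful.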

  18. Accuracy of Ultrasonography and Magnetic Resonance Imaging in the Diagnosis of Placenta Accreta

    PubMed Central

    Riteau, Anne-Sophie; Tassin, Mikael; Chambon, Guillemette; Le Vaillant, Claudine; de Laveaucoupet, Jocelyne; Quéré, Marie-Pierre; Joubert, Madeleine; Prevot, Sophie; Philippe, Henri-Jean; Benachi, Alexandra

    2014-01-01

Purpose: To evaluate the accuracy of ultrasonography and magnetic resonance imaging (MRI) in the diagnosis of placenta accreta and to define the most relevant specific ultrasound and MRI features that may predict placental invasion. Material and Methods: This study was approved by the institutional review board of the French College of Obstetricians and Gynecologists. We retrospectively reviewed the medical records of all patients referred for suspected placenta accreta to two university hospitals from 01/2001 to 05/2012. Our study population included 42 pregnant women who had been investigated by both ultrasonography and MRI. Ultrasound images and MRI were blindly reassessed for each case by 2 raters in order to score features that predict abnormal placental invasion. Results: Sensitivity in the diagnosis of placenta accreta was 100% with ultrasound and 76.9% for MRI (P = 0.03). Specificity was 37.5% with ultrasonography and 50% for MRI (P = 0.6). The features of greatest sensitivity on ultrasonography were intraplacental lacunae and loss of the normal retroplacental clear space. Increased vascularization in the uterine serosa-bladder wall interface and vascularization perpendicular to the uterine wall had the best positive predictive value (92%). At MRI, uterine bulging had the best positive predictive value (85%) and its combination with the presence of dark intraplacental bands on T2-weighted images improved the predictive value to 90%. Conclusion: Ultrasound imaging is the mainstay of screening for placenta accreta. MRI appears to be complementary to ultrasonography, especially when there are few ultrasound signs. PMID:24733409

  19. Health Communication in Social Media: Message Features Predicting User Engagement on Diabetes-Related Facebook Pages.

    PubMed

    Rus, Holly M; Cameron, Linda D

    2016-10-01

    Social media provides unprecedented opportunities for enhancing health communication and health care, including self-management of chronic conditions such as diabetes. Creating messages that engage users is critical for enhancing message impact and dissemination. This study analyzed health communications within ten diabetes-related Facebook pages to identify message features predictive of user engagement. The Common-Sense Model of Illness Self-Regulation and established health communication techniques guided content analyses of 500 Facebook posts. Each post was coded for message features predicted to engage users and numbers of likes, shares, and comments during the week following posting. Multi-level, negative binomial regressions revealed that specific features predicted different forms of engagement. Imagery emerged as a strong predictor; messages with images had higher rates of liking and sharing relative to messages without images. Diabetes consequence information and positive identity predicted higher sharing while negative affect, social support, and crowdsourcing predicted higher commenting. Negative affect, crowdsourcing, and use of external links predicted lower sharing while positive identity predicted lower commenting. The presence of imagery weakened or reversed the positive relationships of several message features with engagement. Diabetes control information and negative affect predicted more likes in text-only messages, but fewer likes when these messages included illustrative imagery. Similar patterns of imagery's attenuating effects emerged for the positive relationships of consequence information, control information, and positive identity with shares and for positive relationships of negative affect and social support with comments. These findings hold promise for guiding communication design in health-related social media.

  20. Natural texture retrieval based on perceptual similarity measurement

    NASA Astrophysics Data System (ADS)

    Gao, Ying; Dong, Junyu; Lou, Jianwen; Qi, Lin; Liu, Jun

    2018-04-01

A typical texture retrieval system performs feature comparison and might not be able to make human-like judgments of image similarity. Meanwhile, it is commonly known that perceptual texture similarity is difficult to describe with traditional image features. In this paper, we propose a new texture retrieval scheme based on perceptual texture similarity. The key to the proposed scheme is that prediction of perceptual similarity is performed by learning a non-linear mapping from image feature space to perceptual texture space using Random Forests. We test the method on a natural texture dataset and apply it to a new wallpaper dataset. Experimental results demonstrate that the proposed texture retrieval scheme with perceptual similarity improves retrieval performance over traditional image features.

  1. Predictive modeling of outcomes following definitive chemoradiotherapy for oropharyngeal cancer based on FDG-PET image characteristics

    NASA Astrophysics Data System (ADS)

    Folkert, Michael R.; Setton, Jeremy; Apte, Aditya P.; Grkovski, Milan; Young, Robert J.; Schöder, Heiko; Thorstad, Wade L.; Lee, Nancy Y.; Deasy, Joseph O.; Oh, Jung Hun

    2017-07-01

In this study, we investigate the use of imaging feature-based outcomes research (‘radiomics’) combined with machine learning techniques to develop robust predictive models for the risk of all-cause mortality (ACM), local failure (LF), and distant metastasis (DM) following definitive chemoradiation therapy (CRT). One hundred seventy-four patients with stage III-IV oropharyngeal cancer (OC) treated at our institution with CRT with retrievable pre- and post-treatment 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) scans were identified. From pre-treatment PET scans, 24 representative imaging features of FDG-avid disease regions were extracted. Using machine learning-based feature selection methods, multiparameter logistic regression models were built incorporating clinical factors and imaging features. All model building methods were tested by cross validation to avoid overfitting, and final outcome models were validated on an independent dataset from a collaborating institution. Multiparameter models were statistically significant on 5-fold cross validation with the area under the receiver operating characteristic curve (AUC) = 0.65 (p = 0.004), 0.73 (p = 0.026), and 0.66 (p = 0.015) for ACM, LF, and DM, respectively. The model for LF retained significance on the independent validation cohort with AUC = 0.68 (p = 0.029), whereas the models for ACM and DM did not reach statistical significance but resulted in predictive power comparable to the 5-fold cross validation with AUC = 0.60 (p = 0.092) and 0.65 (p = 0.062), respectively. In the largest study of its kind to date, predictive features including increasing metabolic tumor volume, increasing image heterogeneity, and increasing tumor surface irregularity significantly correlated with mortality, LF, and DM on 5-fold cross validation in a relatively uniform single-institution cohort. The LF model also retained significance in an independent population.
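The validation pattern used above, a multiparameter logistic model over clinical and imaging covariates scored by cross-validated AUC, can be sketched as follows. The cohort size is borrowed from the abstract for flavor; the covariates and outcome are synthetic, and scikit-learn is assumed.

```python
# Illustrative sketch: logistic regression scored by 5-fold
# cross-validated ROC AUC on synthetic clinical + imaging covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(174, 8))                  # 174 patients, 8 covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=174) > 0).astype(int)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       cv=cv, scoring="roc_auc")
print(f"mean 5-fold AUC: {aucs.mean():.2f}")
```

As the abstract illustrates, a model that looks significant under internal cross-validation can still fail on an independent cohort, so external validation remains the stronger test.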

  2. Deep supervised dictionary learning for no-reference image quality assessment

    NASA Astrophysics Data System (ADS)

    Huang, Yuge; Liu, Xuesong; Tian, Xiang; Zhou, Fan; Chen, Yaowu; Jiang, Rongxin

    2018-03-01

    We propose a deep convolutional neural network (CNN) for general no-reference image quality assessment (NR-IQA), i.e., accurate prediction of image quality without a reference image. The proposed model consists of three components: a local feature extractor that is a fully convolutional network, an encoding module with an inherent dictionary that aggregates local features into a fixed-length global quality-aware image representation, and a regression module that maps the representation to an image quality score. Our model can be trained in an end-to-end manner, and all of the parameters, including the weights of the convolutional layers, the dictionary, and the regression weights, are learned simultaneously from the loss function. In addition, the model can predict quality scores for input images of arbitrary sizes in a single step. We tested our method on commonly used image quality databases and showed that its performance is comparable with that of state-of-the-art general-purpose NR-IQA algorithms.

  3. A feature-based developmental model of the infant brain in structural MRI.

    PubMed

    Toews, Matthew; Wells, William M; Zöllei, Lilla

    2012-01-01

    In this paper, anatomical development is modeled as a collection of distinctive image patterns localized in space and time. A Bayesian posterior probability is defined over a random variable of subject age, conditioned on data in the form of scale-invariant image features. The model is automatically learned from a large set of images exhibiting significant variation, used to discover anatomical structure related to age and development, and fit to new images to predict age. The model is applied to a set of 230 infant structural MRIs of 92 subjects acquired at multiple sites over an age range of 8-590 days. Experiments demonstrate that the model can be used to identify age-related anatomical structure, and to predict the age of new subjects with an average error of 72 days.
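    A toy sketch of the Bayesian formulation above, p(age | features) ∝ p(features | age) p(age), with a single feature, discrete age bins, and a Gaussian likelihood; every number below (bins, means, sigma) is invented for illustration:

```python
# Hypothetical posterior over age conditioned on one observed feature.
# Bins, likelihood means, and sigma are made up; the paper uses
# scale-invariant image features, not a single scalar.
import math

age_bins = [30, 120, 300, 500]               # candidate ages in days
prior = {a: 0.25 for a in age_bins}          # uniform prior over bins
mu = {30: 1.0, 120: 2.0, 300: 3.5, 500: 4.5} # per-bin likelihood means
sigma = 0.8

def posterior(feature_value):
    """p(age | f) via Bayes' rule with a Gaussian likelihood per bin."""
    like = {a: math.exp(-(feature_value - mu[a]) ** 2 / (2 * sigma ** 2))
            for a in age_bins}
    z = sum(like[a] * prior[a] for a in age_bins)
    return {a: like[a] * prior[a] / z for a in age_bins}

post = posterior(2.2)
print(max(post, key=post.get))  # MAP age estimate for this observation
```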

  4. Many local pattern texture features: which is better for image-based multilabel human protein subcellular localization classification?

    PubMed

    Yang, Fan; Xu, Ying-Ying; Shen, Hong-Bin

    2014-01-01

    Human protein subcellular location prediction can provide critical knowledge for understanding a protein's function. Given the significant progress in digital microscopy, automated image-based protein subcellular location classification is urgently needed. In this paper, we aim to investigate more representative image features that can effectively handle multilabel subcellular image samples. We prepared a large multilabel immunohistochemistry (IHC) image benchmark from the Human Protein Atlas database and tested the performance of different local texture features, including the completed local binary pattern, the local tetra pattern, and the standard local binary pattern feature. According to our experimental results from binary relevance multilabel machine learning models, the completed local binary pattern and local tetra pattern are more discriminative for describing IHC images than the traditional local binary pattern descriptor. The combination of these two novel local pattern features and the conventional global texture features is also studied. The enhanced performance of the final binary relevance classification model trained on the combined feature space demonstrates that the different features are complementary to each other and thus capable of improving the accuracy of classification.
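    Of the descriptors compared above, the standard local binary pattern is the simplest; a minimal numpy sketch follows (the completed LBP and local tetra pattern extend this same center-thresholding idea), using a random patch in place of an IHC image:

```python
# Basic 8-neighbor LBP on a synthetic patch. The random patch stands in
# for an IHC image; the descriptor itself is the standard formulation.
import numpy as np

def lbp_codes(img):
    """Compare each pixel's 8 neighbors with the center and pack the
    comparison bits into one 8-bit code per interior pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    h, w = img.shape
    center = img[1:-1, 1:-1]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << np.uint8(bit)
    return codes

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(32, 32))
codes = lbp_codes(patch)
# The texture feature is the normalized histogram of LBP codes.
hist = np.bincount(codes.ravel(), minlength=256) / codes.size
print(codes.shape)
```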

  5. Impact of experimental design on PET radiomics in predicting somatic mutation status.

    PubMed

    Yip, Stephen S F; Parmar, Chintan; Kim, John; Huynh, Elizabeth; Mak, Raymond H; Aerts, Hugo J W L

    2017-12-01

    PET-based radiomic features have demonstrated great promise in predicting genetic data. However, various experimental parameters can influence the feature extraction pipeline and, hence, the extracted features. Here, we investigated how experimental settings affect the performance of radiomic features in predicting somatic mutation status in non-small cell lung cancer (NSCLC) patients. 348 NSCLC patients with somatic mutation testing and diagnostic PET images were included in our analysis. Radiomic feature extractions were analyzed for varying voxel sizes, filters, and bin widths. 66 radiomic features were evaluated. The performance of features in predicting mutation status was assessed using the area under the receiver-operating-characteristic curve (AUC). The influence of experimental parameters on feature predictability was quantified as the relative difference between the minimum and maximum AUC (δ). The large majority of features (n=56, 85%) were significantly predictive for EGFR mutation status (AUC≥0.61). 29 radiomic features significantly predicted EGFR mutations and were robust to experimental settings with δOverall <5%. The overall influence (δOverall) of voxel size, filter, and bin width across all features ranged from 5% to 15%. For all features, no experimental design could discriminate KRAS+ from KRAS- (AUC≤0.56). The predictability of 29 radiomic features was robust to the choice of experimental settings; however, these settings need to be carefully chosen for all other features. The combined effect of the investigated processing methods could be substantial and must be considered. Optimized settings that maximize the predictive performance of individual radiomic features should be investigated in the future. Copyright © 2017 Elsevier B.V. All rights reserved.
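    The robustness measure described above, the relative difference δ between a feature's minimum and maximum AUC across settings, can be sketched directly; the AUC values are invented, and normalizing by the maximum AUC is an assumption about the exact definition:

```python
# Hypothetical per-setting AUCs for one radiomic feature. The setting
# names and values are invented for illustration.
aucs_by_setting = {
    "voxel_1mm_bin_0.1": 0.66,
    "voxel_2mm_bin_0.1": 0.64,
    "voxel_1mm_bin_0.5": 0.63,
    "voxel_2mm_bin_0.5": 0.65,
}

def relative_difference(aucs):
    """delta = (max - min) / max, reported as a percentage.
    (Normalization by the maximum is an assumption here.)"""
    lo, hi = min(aucs), max(aucs)
    return 100.0 * (hi - lo) / hi

delta = relative_difference(aucs_by_setting.values())
print(round(delta, 2))  # a feature with delta < 5% would count as robust here
```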

  6. Using connectome-based predictive modeling to predict individual behavior from brain connectivity

    PubMed Central

    Shen, Xilin; Finn, Emily S.; Scheinost, Dustin; Rosenberg, Monica D.; Chun, Marvin M.; Papademetris, Xenophon; Constable, R. Todd

    2017-01-01

    Neuroimaging is a fast developing research area where anatomical and functional images of human brains are collected using techniques such as functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and electroencephalography (EEG). Technical advances and large-scale datasets have allowed for the development of models capable of predicting individual differences in traits and behavior using brain connectivity measures derived from neuroimaging data. Here, we present connectome-based predictive modeling (CPM), a data-driven protocol for developing predictive models of brain-behavior relationships from connectivity data using cross-validation. This protocol includes the following steps: 1) feature selection, 2) feature summarization, 3) model building, and 4) assessment of prediction significance. We also include suggestions for visualizing the most predictive features (i.e., brain connections). The final result should be a generalizable model that takes brain connectivity data as input and generates predictions of behavioral measures in novel subjects, accounting for a significant amount of the variance in these measures. It has been demonstrated that the CPM protocol performs equivalently or better than most of the existing approaches in brain-behavior prediction. In addition, because CPM focuses on linear modeling and a purely data-driven approach, neuroscientists with limited or no experience in machine learning or optimization will find it easy to implement the protocols. Depending on the volume of data to be processed, the protocol can take 10–100 minutes for model building, 1–48 hours for permutation testing, and 10–20 minutes for visualization of results. PMID:28182017
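    The four CPM steps listed above can be sketched on synthetic data; the correlation threshold, edge counts, and leave-one-out loop are illustrative choices, not the protocol's prescribed values:

```python
# Minimal CPM-style sketch: (1) select edges correlated with behavior,
# (2) summarize selected edges per subject, (3) fit a linear model,
# (4) predict behavior for held-out subjects. All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_edges = 60, 200
edges = rng.normal(size=(n_subjects, n_edges))  # flattened connectivity
# Behavior driven by the first 5 edges plus noise (toy ground truth).
behavior = edges[:, :5].sum(axis=1) + 0.5 * rng.normal(size=n_subjects)

def cpm_predict(train_edges, train_beh, test_edges, r_thresh=0.3):
    # Step 1: feature selection by edgewise Pearson correlation.
    ec = (train_edges - train_edges.mean(0)) / train_edges.std(0)
    bc = (train_beh - train_beh.mean()) / train_beh.std()
    r = ec.T @ bc / len(train_beh)
    selected = np.abs(r) > r_thresh
    # Step 2: feature summarization — sum of selected edges per subject.
    train_sum = train_edges[:, selected].sum(axis=1)
    # Step 3: model building — one-variable linear fit.
    slope, intercept = np.polyfit(train_sum, train_beh, 1)
    # Step 4: apply the model to the novel subject.
    return slope * test_edges[selected].sum() + intercept

# Leave-one-out cross-validation over subjects.
preds = np.array([cpm_predict(np.delete(edges, i, 0), np.delete(behavior, i),
                              edges[i]) for i in range(n_subjects)])
print(round(float(np.corrcoef(preds, behavior)[0, 1]), 2))
```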

  7. Applying a new computer-aided detection scheme generated imaging marker to predict short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Patel, Bhavika; Heidari, Morteza; Liu, Hong; Zheng, Bin

    2018-05-01

    This study aims to investigate the feasibility of identifying a new quantitative imaging marker based on false-positives generated by a computer-aided detection (CAD) scheme to help predict short-term breast cancer risk. An image dataset including four-view mammograms acquired from 1044 women was retrospectively assembled. All mammograms were originally interpreted as negative by radiologists. In the subsequent mammography screening, 402 women were diagnosed with breast cancer and 642 remained negative. An existing CAD scheme was applied ‘as is’ to process each image. From the CAD-generated results, four detection features were computed from each image: (1) the total number of initial detection seeds, (2) the number of final detected false-positive regions, and the (3) average and (4) sum of detection scores. Then, by combining the features computed from the two bilateral images of the left and right breasts from either the craniocaudal or mediolateral oblique view, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method to predict the likelihood of each testing case being positive in the subsequent screening. The new prediction model yielded a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of (2.95, 6.83). The results also showed an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, this study demonstrated that CAD-generated false-positives might include valuable information, which needs to be further explored for identifying and/or developing more effective imaging markers for predicting short-term breast cancer risk.

  8. SU-E-J-256: Predicting Metastasis-Free Survival of Rectal Cancer Patients Treated with Neoadjuvant Chemo-Radiotherapy by Data-Mining of CT Texture Features of Primary Lesions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, H; Wang, J; Shen, L

    Purpose: The purpose of this study is to investigate the relationship between computed tomographic (CT) texture features of primary lesions and metastasis-free survival for rectal cancer patients, and to develop a data-mining prediction model using texture features. Methods: A total of 220 rectal cancer patients treated with neoadjuvant chemo-radiotherapy (CRT) were enrolled in this study. All patients underwent CT scans before CRT. The primary lesions on the CT images were delineated by two experienced oncologists. The CT images were filtered by Laplacian of Gaussian (LoG) filters with different filter values (1.0–2.5: from fine to coarse). Both filtered and unfiltered images were analyzed using Gray-Level Co-occurrence Matrix (GLCM) texture analysis with different directions (transversal, sagittal, and coronal). In total, 270 texture features with different types, directions, and filter values were extracted. Texture features were examined with Student’s t-test for selecting predictive features. Principal Component Analysis (PCA) was performed on the selected features to reduce feature collinearity. Artificial neural network (ANN) and logistic regression were applied to establish metastasis prediction models. Results: Forty-six of 220 patients developed metastasis with a follow-up time of more than 2 years. Sixty-seven texture features were significantly different in the t-test (p<0.05) between patients with and without metastasis, and 12 of them were extremely significant (p<0.001). The area under the curve (AUC) of the ANN was 0.72, and the concordance index (CI) of logistic regression was 0.71. The predictability of the ANN was slightly better than that of logistic regression. Conclusion: CT texture features of primary lesions are related to metastasis-free survival of rectal cancer patients. Both ANN- and logistic-regression-based models can be developed for prediction.
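    GLCM texture analysis, the core of the feature extraction above, can be sketched for a single offset; the gray-level count, offset, and toy image are illustrative, and only the contrast feature is shown:

```python
# Symmetric GLCM for one pixel offset plus the contrast feature.
# The 8-level quantization and random toy image are illustrative only.
import numpy as np

def glcm_horizontal(img, levels):
    """Normalized symmetric GLCM for the (0, 1) right-neighbor offset."""
    m = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[a, b] += 1
        m[b, a] += 1  # symmetric: count the pair in both orders
    return m / m.sum()

def glcm_contrast(p):
    """Contrast: sum of (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

rng = np.random.default_rng(3)
quantized = rng.integers(0, 8, size=(16, 16))  # toy image, 8 gray levels
p = glcm_horizontal(quantized, levels=8)
print(round(glcm_contrast(p), 3))
```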

  9. Deep feature classification of angiomyolipoma without visible fat and renal cell carcinoma in abdominal contrast-enhanced CT images with texture image patches and hand-crafted feature concatenation.

    PubMed

    Lee, Hansang; Hong, Helen; Kim, Junmo; Jung, Dae Chul

    2018-04-01

    To develop an automatic deep feature classification (DFC) method for distinguishing benign angiomyolipoma without visible fat (AMLwvf) from malignant clear cell renal cell carcinoma (ccRCC) in abdominal contrast-enhanced computed tomography (CE CT) images. A dataset including 80 abdominal CT images of 39 AMLwvf and 41 ccRCC patients was used. We proposed a DFC method for differentiating the small renal masses (SRM) into AMLwvf and ccRCC using the combination of hand-crafted and deep features, and machine learning classifiers. First, 71-dimensional hand-crafted features (HCF) of texture and shape were extracted from the SRM contours. Second, 1000-4000-dimensional deep features (DF) were extracted from an ImageNet-pretrained deep learning model with the SRM image patches. In DF extraction, we proposed the texture image patches (TIP) to emphasize the texture information inside the mass in DFs and reduce the mass size variability. Finally, the two features were concatenated and a random forest (RF) classifier was trained on these concatenated features to classify the types of SRMs. The proposed method was tested on our dataset using leave-one-out cross-validation and evaluated using accuracy, sensitivity, specificity, positive predictive values (PPV), negative predictive values (NPV), and area under the receiver operating characteristic curve (AUC). In experiments, the combinations of four deep learning models, AlexNet, VGGNet, GoogLeNet, and ResNet, and four input image patches, including original, masked, mass-size, and texture image patches, were compared and analyzed. In qualitative evaluation, we observed the change in feature distributions between the proposed and comparative methods using the t-SNE method. 
In quantitative evaluation, we evaluated and compared the classification results, and observed that (a) the proposed HCF + DF outperformed HCF-only and DF-only, (b) AlexNet showed generally the best performance among the CNN models, and (c) the proposed TIPs not only achieved competitive performance among the input patches but also steady performance regardless of the CNN model. As a result, the proposed method achieved an accuracy of 76.6 ± 1.4% for the proposed HCF + DF with AlexNet and TIPs, which improved the accuracy by 6.6%p and 8.3%p compared to HCF-only and DF-only, respectively. The proposed shape features and TIPs improved the HCFs and DFs, respectively, and the feature concatenation further enhanced the quality of features for differentiating AMLwvf from ccRCC in abdominal CE CT images. © 2018 American Association of Physicists in Medicine.

  10. Radiomics-based features for pattern recognition of lung cancer histopathology and metastases.

    PubMed

    Ferreira Junior, José Raniery; Koenigkam-Santos, Marcel; Cipriano, Federico Enrique Garcia; Fabro, Alexandre Todorovic; Azevedo-Marques, Paulo Mazzoncini de

    2018-06-01

    Lung cancer is the leading cause of cancer-related deaths in the world, and its poor prognosis varies markedly according to tumor staging. Computed tomography (CT) is the imaging modality of choice for lung cancer evaluation, being used for diagnosis and clinical staging. Besides tumor stage, other features, like histopathological subtype, can also add prognostic information. In this work, radiomics-based CT features were used to predict lung cancer histopathology and metastases using machine learning models. Local image datasets of confirmed primary malignant pulmonary tumors were retrospectively evaluated for testing and validation. CT images acquired with the same protocol were semiautomatically segmented. Tumors were characterized by clinical features and computed attributes of intensity, histogram, texture, shape, and volume. Three machine learning classifiers used up to 100 selected features to perform the analysis. Radiomics-based features yielded areas under the receiver operating characteristic curve of 0.89, 0.97, and 0.92 at testing and 0.75, 0.71, and 0.81 at validation for lymph nodal metastasis, distant metastasis, and histopathology pattern recognition, respectively. The radiomics characterization approach presented great potential for use in a computational model to aid lung cancer histopathological subtype diagnosis as a "virtual biopsy" and metastatic prediction for therapy decision support, without the need for whole-body imaging. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Machine learning approaches for integrating clinical and imaging features in late-life depression classification and response prediction.

    PubMed

    Patel, Meenal J; Andreescu, Carmen; Price, Julie C; Edelman, Kathryn L; Reynolds, Charles F; Aizenstein, Howard J

    2015-10-01

    Currently, depression diagnosis relies primarily on behavioral symptoms and signs, and treatment is guided by trial and error instead of evaluating associated underlying brain characteristics. Unlike past studies, we attempted to estimate accurate prediction models for late-life depression diagnosis and treatment response using multiple machine learning methods with inputs of multi-modal imaging and non-imaging whole brain and network-based features. Late-life depression patients (medicated post-recruitment) (n = 33) and older non-depressed individuals (n = 35) were recruited. Their demographics and cognitive ability scores were recorded, and brain characteristics were acquired using multi-modal magnetic resonance imaging pretreatment. Linear and nonlinear learning methods were tested for estimating accurate prediction models. A learning method called alternating decision trees estimated the most accurate prediction models for late-life depression diagnosis (87.27% accuracy) and treatment response (89.47% accuracy). The diagnosis model included measures of age, Mini-Mental State Examination score, and structural imaging (e.g. whole brain atrophy and global white matter hyperintensity burden). The treatment response model included measures of structural and functional connectivity. Combinations of multi-modal imaging and/or non-imaging measures may help better predict late-life depression diagnosis and treatment response. As a preliminary observation, we speculate that the results may also suggest that different underlying brain characteristics defined by multi-modal imaging measures, rather than region-based differences, are associated with depression versus depression recovery, because to our knowledge this is the first depression study to accurately predict both using the same approach. These findings may help better understand late-life depression and identify preliminary steps toward personalized late-life depression treatment. 
Copyright © 2015 John Wiley & Sons, Ltd.

  12. MR imaging features associated with distant metastasis-free survival of patients with invasive breast cancer: a case-control study.

    PubMed

    Song, Sung Eun; Shin, Sung Ui; Moon, Hyeong-Gon; Ryu, Han Suk; Kim, Kwangsoo; Moon, Woo Kyung

    2017-04-01

    Preoperative breast magnetic resonance (MR) imaging features of primary breast cancers may have the potential to act as prognostic biomarkers by providing morphologic and kinetic features representing inter- or intra-tumor heterogeneity. Recent radiogenomic studies reveal that several radiologist-annotated image features are associated with genes or signal pathways involved in tumor progression, treatment resistance, and distant metastasis (DM). We investigate whether preoperative breast MR imaging features are associated with worse DM-free survival in patients with invasive breast cancer. Of the 3536 patients with primary breast cancers who underwent preoperative MR imaging between 2003 and 2009, 147 patients with DM were identified and one-to-one matched with control patients (n = 147) without DM according to clinical-pathologic variables. Three radiologists independently reviewed the MR images of 294 patients, and the association of DM-free survival with MR imaging and clinical-pathologic features was assessed using Cox proportional hazard models. Of MR imaging features, rim enhancement (hazard ratio [HR], 1.83 [95% confidence interval, CI 1.29, 2.51]; p = 0.001) and peritumoral edema (HR, 1.48 [95% CI 1.03, 2.11]; p = 0.032) were the significant features associated with worse DM-free survival. The significant MR imaging features, however, were different between breast cancer subtypes and stages. Preoperative breast MR imaging features of rim enhancement and peritumoral edema may be used as prognostic biomarkers that help predict DM risk in patients with breast cancer, thereby potentially enabling improved personalized treatment and monitoring strategies for individual patients.

  13. Effectiveness of evaluating tumor vascularization using 3D power Doppler ultrasound with high-definition flow technology in the prediction of the response to neoadjuvant chemotherapy for T2 breast cancer: a preliminary report

    NASA Astrophysics Data System (ADS)

    Shia, Wei-Chung; Chen, Dar-Ren; Huang, Yu-Len; Wu, Hwa-Koon; Kuo, Shou-Jen

    2015-10-01

    The aim of this study was to evaluate the effectiveness of advanced ultrasound (US) imaging of vascular flow and morphological features in the prediction of a pathologic complete response (pCR) and a partial response (PR) to neoadjuvant chemotherapy for T2 breast cancer. Twenty-nine consecutive patients with T2 breast cancer treated with six courses of anthracycline-based neoadjuvant chemotherapy were enrolled. Three-dimensional (3D) power Doppler US with high-definition flow (HDF) technology was used to investigate the blood flow in and morphological features of the tumors. Six vascularity quantization features, three morphological features, and two vascular direction features were selected and extracted from the US images. A support vector machine was used to evaluate the changes in vascularity after neoadjuvant chemotherapy, and pCR and PR were predicted on the basis of these changes. The most accurate prediction of pCR was achieved after the first chemotherapy cycle, with an accuracy of 93.1% and a specificity of 85.5%, while that of a PR was achieved after the second cycle, with an accuracy of 79.31% and a specificity of 72.22%. Vascularity data can be useful to predict the effects of neoadjuvant chemotherapy. Determination of changes in vascularity after neoadjuvant chemotherapy using 3D power Doppler US with HDF can generate accurate predictions of the patient response, facilitating early decision-making.

  14. The Value of 5-Aminolevulinic Acid in Low-grade Gliomas and High-grade Gliomas Lacking Glioblastoma Imaging Features: An Analysis Based on Fluorescence, Magnetic Resonance Imaging, 18F-Fluoroethyl Tyrosine Positron Emission Tomography, and Tumor Molecular Factors.

    PubMed

    Jaber, Mohammed; Wölfer, Johannes; Ewelt, Christian; Holling, Markus; Hasselblatt, Martin; Niederstadt, Thomas; Zoubi, Tarek; Weckesser, Matthias; Stummer, Walter

    2016-03-01

    Approximately 20% of grade II and most grade III gliomas fluoresce after 5-aminolevulinic acid (5-ALA) application. Conversely, approximately 30% of nonenhancing gliomas are actually high grade. The aim of this study was to identify preoperative factors (i.e., age, enhancement, 18F-fluoroethyl tyrosine positron emission tomography [F-FET PET] uptake ratios) for predicting fluorescence in gliomas without typical glioblastoma imaging features and to determine whether fluorescence will allow prediction of tumor grade or molecular characteristics. Patients harboring gliomas without typical glioblastoma imaging features were given 5-ALA. Fluorescence was recorded intraoperatively, and biopsy specimens were collected from fluorescing tissue. World Health Organization (WHO) grade, Ki-67/MIB-1 index, IDH1 (R132H) mutation status, O-methylguanine DNA methyltransferase (MGMT) promoter methylation status, and 1p/19q co-deletion status were assessed. Predictive factors for fluorescence were derived from preoperative magnetic resonance imaging and F-FET PET. Classification and regression tree analysis and receiver-operating-characteristic curves were generated for defining predictors. Of 166 tumors, 82 were diagnosed as WHO grade II, 76 as grade III, and 8 as grade IV glioblastomas. Contrast enhancement, tumor volume, and F-FET PET uptake ratio >1.85 predicted fluorescence. Fluorescence correlated with WHO grade (P < .001) and Ki-67/MIB-1 index (P < .001), but not with MGMT promoter methylation status, IDH1 mutation status, or 1p19q co-deletion status. The Ki-67/MIB-1 index in fluorescing grade III gliomas was higher than in nonfluorescing tumors, whereas in fluorescing and nonfluorescing grade II tumors, no differences were noted. Age, tumor volume, and F-FET PET uptake are factors predicting 5-ALA-induced fluorescence in gliomas without typical glioblastoma imaging features. Fluorescence was associated with an increased Ki-67/MIB-1 index and high-grade pathology. 
Whether fluorescence in grade II gliomas identifies a subtype with worse prognosis remains to be determined.

  15. Training a cell-level classifier for detecting basal-cell carcinoma by combining human visual attention maps with low-level handcrafted features

    PubMed Central

    Corredor, Germán; Whitney, Jon; Arias, Viviana; Madabhushi, Anant; Romero, Eduardo

    2017-01-01

    Computational histomorphometric approaches typically use low-level image features for building machine learning classifiers. However, these approaches usually ignore high-level expert knowledge. A computational model (M_im) combines low-, mid-, and high-level image information to predict the likelihood of cancer in whole slide images. Handcrafted low- and mid-level features are computed from area, color, and spatial nuclei distributions. High-level information is implicitly captured from the recorded navigations of pathologists while exploring whole slide images during diagnostic tasks. This model was validated by predicting the presence of cancer in a set of unseen fields of view. The available database was composed of 24 cases of basal-cell carcinoma, from which 17 served to estimate the model parameters and the remaining 7 comprised the evaluation set. A total of 274 fields of view of size 1024×1024 pixels were extracted from the evaluation set. Then 176 patches from this set were used to train a support vector machine classifier to predict the presence of cancer on a patch-by-patch basis while the remaining 98 image patches were used for independent testing, ensuring that the training and test sets do not comprise patches from the same patient. A baseline model (M_ex) estimated the cancer likelihood for each of the image patches. M_ex uses the same visual features as M_im, but its weights are estimated from nuclei manually labeled as cancerous or noncancerous by a pathologist. M_im achieved an accuracy of 74.49% and an F-measure of 80.31%, while M_ex yielded corresponding accuracy and F-measures of 73.47% and 77.97%, respectively. PMID:28382314
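    The reported accuracy and F-measure combine in the standard way from confusion counts; a small sketch with invented counts (not the paper's confusion matrix):

```python
# Accuracy and F-measure from binary confusion counts. The counts below
# are made up for illustration.
def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

tp, fp, fn, tn = 40, 8, 12, 38          # hypothetical patch-level results
accuracy = (tp + tn) / (tp + fp + fn + tn)
print(round(accuracy, 4), round(f_measure(tp, fp, fn), 4))
```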

  16. Beef quality grading using machine vision

    NASA Astrophysics Data System (ADS)

    Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha

    2000-12-01

    A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
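    The fat/lean differentiation step above uses fuzzy c-means; a minimal 1-D numpy sketch of the algorithm follows, with synthetic intensities standing in for pixel values (the two-cluster setup and fuzzifier m = 2 are assumptions, not taken from the paper):

```python
# 1-D fuzzy c-means: alternate between membership-weighted cluster
# centers and soft membership updates. Data are synthetic intensities.
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50, seed=5):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)        # random initial memberships
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)            # weighted means
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = 1.0 / ((d ** p) * (d ** -p).sum(axis=1, keepdims=True))
    return centers, u

rng = np.random.default_rng(6)
# Synthetic pixel intensities: a dark "lean" and a bright "fat" population.
x = np.concatenate([rng.normal(60, 5, 200), rng.normal(180, 10, 200)])
centers, u = fuzzy_cmeans(x)
print(sorted(np.round(centers).tolist()))
```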

  17. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) for early stage non-small cell lung cancer (NSCLC) by using a new multi-objective radiomics model. Methods: Currently, most available radiomics models use the overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model which considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12), and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4), and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT; and selecting features and training model parameters based on the multiple objectives. A support vector machine (SVM) is used as the predictive model, while the nondominated sorting genetic algorithm II (NSGA-II) is used to solve the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. 
The experimental results show that the best performance can be obtained by combining all features.
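    The multi-objective idea above, judging models on sensitivity and specificity jointly rather than on accuracy alone, amounts to keeping a Pareto front of candidates; the model names and metric values below are invented for illustration (they only loosely echo the numbers in the abstract):

```python
# Pareto front over (sensitivity, specificity). Candidates and values
# are hypothetical, not the study's measured results.
models = {
    "PET":          (0.42, 0.80),
    "clinical":     (0.60, 0.90),
    "PET+clinical": (0.50, 0.975),
    "all":          (0.58, 0.975),
}

def pareto_front(points):
    """Keep candidates not dominated in both objectives by another."""
    front = []
    for name, (se, sp) in points.items():
        dominated = any(o_se >= se and o_sp >= sp and (o_se, o_sp) != (se, sp)
                        for o_se, o_sp in points.values())
        if not dominated:
            front.append(name)
    return front

print(pareto_front(models))  # → ['clinical', 'all']
```

An optimizer such as NSGA-II searches feature subsets and model parameters while maintaining exactly this kind of nondominated set.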

  18. Predictive features of breast cancer on Mexican screening mammography patients

    NASA Astrophysics Data System (ADS)

    Rodriguez-Rojas, Juan; Garza-Montemayor, Margarita; Trevino-Alvarado, Victor; Tamez-Pena, José Gerardo

    2013-02-01

    Breast cancer is the most common type of cancer worldwide. In response, breast cancer screening programs are becoming common around the world, and public programs now serve millions of women worldwide. These programs are expensive, requiring many specialized radiologists to examine all images. Nevertheless, there is a lack of trained radiologists in many countries, as in Mexico, which is a barrier to decreasing breast cancer mortality, pointing to the need for a triaging system that prioritizes high-risk cases for prompt interpretation. Therefore, we explored in an image database of Mexican patients whether high-risk cases can be distinguished using image features. We collected a set of 200 digital screening mammography cases from a hospital in Mexico, and assigned low- or high-risk labels according to their BI-RADS scores. Breast tissue segmentation was performed using an automatic procedure. Image features were obtained considering only the segmented region on each view and comparing the bilateral differences of the obtained features. Predictive combinations of features were chosen using a genetic-algorithm-based feature selection procedure. The best model found was able to classify low-risk and high-risk cases with an area under the ROC curve of 0.88 on a 150-fold cross-validation test. The features selected were associated with differences in signal distribution and tissue shape on bilateral views. The model found can be used to automatically identify high-risk cases and trigger the necessary measures to provide prompt treatment.

  19. Investigating the Link Between Radiologists' Gaze, Diagnostic Decision, and Image Content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tourassi, Georgia; Voisin, Sophie; Paquit, Vincent C

    2013-01-01

    Objective: To investigate machine learning for linking image content, human perception, cognition, and error in the diagnostic interpretation of mammograms. Methods: Gaze data and diagnostic decisions were collected from six radiologists who reviewed 20 screening mammograms while wearing a head-mounted eye-tracker. Texture analysis was performed in mammographic regions that attracted radiologists' attention and in all abnormal regions. Machine learning algorithms were investigated to develop predictive models that link: (i) image content with gaze, (ii) image content and gaze with cognition, and (iii) image content, gaze, and cognition with diagnostic error. Both group-based and individualized models were explored. Results: By pooling the data from all radiologists, machine learning produced highly accurate predictive models linking image content, gaze, cognition, and error. Merging radiologists' gaze metrics and cognitive opinions with computer-extracted image features identified 59% of the radiologists' diagnostic errors while confirming 96.2% of their correct diagnoses. Radiologists' individual errors could be adequately predicted by modeling the behavior of their peers. However, personalized tuning appears to be beneficial in many cases to capture individual behavior more accurately. Conclusions: Machine learning algorithms combining image features with radiologists' gaze data and diagnostic decisions can be effectively developed to recognize cognitive and perceptual errors associated with the diagnostic interpretation of mammograms.

  20. Reproducibility of radiomics for deciphering tumor phenotype with imaging

    NASA Astrophysics Data System (ADS)

    Zhao, Binsheng; Tan, Yongqiang; Tsai, Wei-Yann; Qi, Jing; Xie, Chuanmiao; Lu, Lin; Schwartz, Lawrence H.

    2016-03-01

    Radiomics (radiogenomics) characterizes tumor phenotypes based on quantitative image features derived from routine radiologic imaging to improve cancer diagnosis, prognosis, prediction and response to therapy. Although radiomic features must be reproducible to qualify as biomarkers for clinical care, little is known about how routine imaging acquisition techniques/parameters affect reproducibility. To begin to fill this knowledge gap, we assessed the reproducibility of a comprehensive, commonly-used set of radiomic features using a unique, same-day repeat computed tomography data set from lung cancer patients. Each scan was reconstructed at 6 imaging settings, varying slice thicknesses (1.25 mm, 2.5 mm and 5 mm) and reconstruction algorithms (sharp, smooth). Reproducibility was assessed using the repeat scans reconstructed at identical imaging setting (6 settings in total). In separate analyses, we explored differences in radiomic features due to different imaging parameters by assessing the agreement of these radiomic features extracted from the repeat scans reconstructed at the same slice thickness but different algorithms (3 settings in total). Our data suggest that radiomic features are reproducible over a wide range of imaging settings. However, smooth and sharp reconstruction algorithms should not be used interchangeably. These findings will raise awareness of the importance of properly setting imaging acquisition parameters in radiomics/radiogenomics research.
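    Test-retest reproducibility of a radiomic feature is commonly summarized with Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic shifts between the two measurements (an assumption for illustration; this abstract does not name its agreement metric). A minimal sketch:

```python
from statistics import mean, pvariance

def ccc(x, y):
    """Lin's concordance correlation coefficient between a feature measured
    on test and retest scans; 1.0 means perfect agreement, and any mean
    shift or scale difference pulls the value below the Pearson r."""
    mx, my = mean(x), mean(y)
    vx, vy = pvariance(x), pvariance(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```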

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Fujita, A; Buch, K

    Purpose: To investigate the correlation between a texture analysis-based model observer and human observers in the task of diagnosing ischemic infarct on non-contrast head CT of adults. Methods: Non-contrast head CTs of five patients (2 M, 3 F; 58–83 y) with ischemic infarcts were retro-reconstructed using FBP and Adaptive Statistical Iterative Reconstruction (ASIR) at various levels (10–100%). Six neuroradiologists reviewed each image and scored image quality for diagnosing acute infarcts on a 9-point Likert scale in a blinded test. These scores were averaged across the observers to produce the average human observer responses. The chief neuroradiologist placed multiple ROIs over the infarcts. These ROIs were entered into a texture analysis software package. Forty-two features per image, including 11 GLRL, 5 GLCM, 4 GLGM, 9 Laws, and 13 2-D features, were computed and averaged over the images per dataset. The Fisher coefficient (ratio of between-class variance to in-class variance) was calculated for each feature to identify the most discriminating features from each matrix, i.e., those that separate the different confidence scores most efficiently. The 15 features with the highest Fisher coefficients were entered into linear multivariate regression for iterative modeling. Results: Multivariate regression analysis resulted in the best prediction model of the confidence scores after three iterations (df=11, F=11.7, p-value<0.0001). The model-predicted scores and the human observer scores were highly correlated (R=0.88, R-sq=0.77). The root-mean-square and maximal residuals were 0.21 and 0.44, respectively. The residual scatter plot appeared random, symmetric, and unbiased. Conclusion: For the diagnosis of ischemic infarct on non-contrast head CT in adults, image quality scores predicted by the texture analysis-based model observer were highly correlated with those of human observers across various noise levels. A texture-based model observer can characterize the image quality of low-contrast, subtle texture changes, complementing human observers.
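    The Fisher coefficient used above for feature ranking is defined in the abstract as the ratio of between-class variance to in-class variance. A minimal sketch under exactly that definition, scoring one feature whose values are grouped by class (illustrative, not the cited software package):

```python
from statistics import mean, pvariance

def fisher_coefficient(groups):
    """Between-class variance over within-class variance for one feature,
    given its values grouped by class (e.g. by confidence score)."""
    grand = mean(v for g in groups for v in g)
    n = sum(len(g) for g in groups)
    between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / n
    within = sum(len(g) * pvariance(g) for g in groups) / n
    return between / within
```

    A feature whose class means are well separated relative to its within-class spread gets a high score and survives the top-15 cut.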

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, X; Zhou, Z; Thomas, K

    Purpose: The goal of this work is to investigate the use of contrast enhanced computed tomographic (CT) features for the prediction of mutations of BAP1, PBRM1, and VHL genes in renal cell carcinoma (RCC). Methods: For this study, we used two patient databases with renal cell carcinoma (RCC). The first one consisted of 33 patients from our institution (UT Southwestern Medical Center, UTSW). The second one consisted of 24 patients from the Cancer Imaging Archive (TCIA), where each patient is connected by a unique identifier to the tissue samples from the Cancer Genome Atlas (TCGA). From the contrast enhanced CT image of each patient, the tumor contour was first delineated by a physician. Geometry, intensity, and texture features were extracted from the delineated tumor. Based on the UTSW dataset, we completed feature selection and trained a support vector machine (SVM) classifier to predict mutations of BAP1, PBRM1 and VHL genes. We then used the TCIA-TCGA dataset to validate the predictive model built upon the UTSW dataset. Results: The prediction accuracy of gene expression for TCIA-TCGA patients was 0.83 (20 of 24), 0.83 (20 of 24), and 0.75 (18 of 24) for BAP1, PBRM1, and VHL respectively. For the BAP1 gene, texture was the most prominent feature type. For the PBRM1 gene, intensity was the most prominent. For the VHL gene, geometry, intensity, and texture features were all important. Conclusion: Using our feature selection strategy and models, we achieved predictive accuracy over 0.75 for all three genes under the condition of using patient data from one institution for training and data from other institutions for testing. These results suggest that radiogenomics can be used to aid in prognosis and serve as a convenient surrogate for expensive and time-consuming gene assay procedures.

  3. Salient in space, salient in time: Fixation probability predicts fixation duration during natural scene viewing.

    PubMed

    Einhäuser, Wolfgang; Nuthmann, Antje

    2016-09-01

    During natural scene viewing, humans typically attend and fixate selected locations for about 200-400 ms. Two variables characterize such "overt" attention: the probability of a location being fixated, and the fixation's duration. Both variables have been widely researched, but little is known about their relation. We use a two-step approach to investigate the relation between fixation probability and duration. In the first step, we use a large corpus of fixation data. We demonstrate that fixation probability (empirical salience) predicts fixation duration across different observers and tasks. Linear mixed-effects modeling shows that this relation is explained neither by joint dependencies on simple image features (luminance, contrast, edge density) nor by spatial biases (central bias). In the second step, we experimentally manipulate some of these features. We find that fixation probability from the corpus data still predicts fixation duration for this new set of experimental data. This holds even if stimuli are deprived of low-level image features, as long as higher-level scene structure remains intact. Together, this shows a robust relation between fixation duration and probability, which does not depend on simple image features. Moreover, the study exemplifies the combination of empirical research on a large corpus of data with targeted experimental manipulations.

  4. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks.

    PubMed

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-04-26

    With the application of various data acquisition devices, large amounts of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduce a new density-based clustering method to identify stopovers from migratory birds' movement data and generate classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and labeled a patch as a positive sample if it overlapped a stopover. Second, a multi-convolutional neural network model is proposed for extracting features from temperature data and remote sensing images, respectively. A Support Vector Machine (SVM) model is then used to combine the features and produce the final classification. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years. The results indicate that our proposed method outperforms the existing baseline methods and achieves good performance in habitat suitability prediction.
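    The patch-labeling step described above (16 × 16 patches marked positive when they overlap a stopover) can be sketched with axis-aligned bounding boxes standing in for stopover clusters. This is an assumption for illustration; the paper's density-based clusters need not be rectangular:

```python
def label_patches(image_w, image_h, stopovers, patch=16):
    """Split an image grid into patch x patch tiles and label a tile 1
    (positive) if it overlaps any stopover bounding box (x0, y0, x1, y1),
    else 0. Returns {(px, py): label} keyed by tile origin."""
    labels = {}
    for py in range(0, image_h, patch):
        for px in range(0, image_w, patch):
            positive = any(
                px < x1 and px + patch > x0 and py < y1 and py + patch > y0
                for (x0, y0, x1, y1) in stopovers)
            labels[(px, py)] = 1 if positive else 0
    return labels
```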

  5. A Feature-based Developmental Model of the Infant Brain in Structural MRI

    PubMed Central

    Toews, Matthew; Wells, William M.; Zöllei, Lilla

    2014-01-01

    In this paper, anatomical development is modeled as a collection of distinctive image patterns localized in space and time. A Bayesian posterior probability is defined over a random variable of subject age, conditioned on data in the form of scale-invariant image features. The model is automatically learned from a large set of images exhibiting significant variation, used to discover anatomical structure related to age and development, and fit to new images to predict age. The model is applied to a set of 230 infant structural MRIs of 92 subjects acquired at multiple sites over an age range of 8-590 days. Experiments demonstrate that the model can be used to identify age-related anatomical structure, and to predict the age of new subjects with an average error of 72 days. PMID:23286050
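    The Bayesian posterior over subject age, conditioned on image features, reduces in a discretized setting to Bayes' rule over age bins. A minimal sketch with precomputed per-bin feature likelihoods (hypothetical inputs; the paper's likelihoods come from matched scale-invariant features):

```python
def age_posterior(priors, likelihoods):
    """Posterior P(age bin | features) from a prior over discrete age bins
    and the likelihood of the observed features under each bin."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

    Age prediction then amounts to taking the posterior mean or mode over bins.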

  6. Intratumor partitioning and texture analysis of dynamic contrast-enhanced (DCE)-MRI identifies relevant tumor subregions to predict pathological response of breast cancer to neoadjuvant chemotherapy.

    PubMed

    Wu, Jia; Gong, Guanghua; Cui, Yi; Li, Ruijiang

    2016-11-01

    To predict pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multiregion analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). In this Institutional Review Board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using 3T DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the DCE-MRI data with high temporal resolution. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC was used to predict pathological complete response after NAC. Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast washout were statistically significant (P < 0.05) after correcting for multiple testing, with areas under the receiver operating characteristic (ROC) curve (AUCs) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (P = 0.002) in leave-one-out cross-validation. This improved upon conventional imaging predictors such as tumor volume (AUC = 0.53) and texture features based on whole-tumor analysis (AUC = 0.65). The heterogeneity of the tumor subregion associated with fast washout on DCE-MRI predicted pathological response to NAC in breast cancer. J. Magn. Reson. Imaging 2016;44:1107-1115. © 2016 International Society for Magnetic Resonance in Medicine.
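    The partitioning step (k-means on PCA-defined eigenmaps) can be illustrated in one dimension by clustering each voxel's leading PCA projection. A minimal Lloyd's-iteration sketch, not the authors' implementation:

```python
def kmeans_1d(values, k, iters=50):
    """Minimal Lloyd's k-means on scalar values (e.g. each voxel's leading
    PCA projection), returning a cluster index per value."""
    # spread initial centers across the sorted values
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        # assignment step: nearest center
        assign = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        # update step: each center moves to the mean of its members
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centers[c] = sum(members) / len(members)
    return assign
```

    In the paper each voxel carries a PCA-reduced enhancement profile rather than a scalar, but the assign/update loop is the same.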

  7. Near-infrared spectral image analysis of pork marbling based on Gabor filter and wide line detector techniques.

    PubMed

    Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O

    2014-01-01

    Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which is inefficient and costly for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm) together with appropriate image processing techniques was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart from the National Pork Producers Council. Image analysis techniques (Gabor filter, wide line detector, and spectral averaging) were applied to extract texture, line, and spectral features, respectively, from the NIR images of pork. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regression based on derivatives of mean spectra and line features at key wavelengths. The results showed that derivatives of the texture and spectral features both produced good results, with correlation coefficients of validation of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results reveal the great potential of the Gabor filter for analyzing NIR images of pork for effective and efficient objective evaluation of pork marbling.
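    A Gabor filter, as used here for oriented texture, is a sinusoid windowed by a Gaussian. A minimal sketch of the real part of such a kernel (parameter names are illustrative; the paper's filter-bank settings are not given in the abstract):

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a Gabor kernel: a cosine at angle theta and the given
    wavelength, windowed by an isotropic Gaussian of width sigma. Returned
    as a size x size list of lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates so the sinusoid runs along theta
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

    Convolving an image band with a bank of such kernels at several orientations responds strongly to marbling streaks aligned with each theta.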

  8. WE-G-BRD-09: Prediction of Local Control/Failure by Using Feature Histogram Selection in Follow-Up T2-Weighted MR Image in Spinal Tumors After Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, J; Harb, J; Jawad, M

    2014-06-15

    Purpose: In follow-up T2-weighted MR images of spinal tumor patients treated with stereotactic body radiation therapy (SBRT), high-intensity features embedded in dark surroundings may suggest a local failure (LF). We investigated image-intensity-histogram features to predict LF and local control (LC). Methods: Sixty-seven spinal tumors were treated with SBRT at our institution with scheduled follow-up MR T2-weighted (TR 3200–6600 ms; TE 75-132 ms) imaging. The LF group included 10 tumors with 8.7 months median follow-up, while the LC group had 11 tumors with 24.1 months median follow-up. The follow-up images were fused to the planning CT. Image intensity histograms of the GTV were calculated. Voxels in greater than 90% (V90), 80% (V80), and the peak (Vpeak) of the histogram were grouped into sub-ROIs to determine the best feature histogram. The intensity of each sub-ROI was evaluated using the mean T2-weighted signal ratio (intensity in sub-ROI / intensity in normal vertebrae). An ROC curve in predicting LF for each sub-ROI was calculated to determine the best feature histogram parameter for LF prediction. Results: The mean T2-weighted signal ratio in the LF group was significantly higher than that in the LC group for all sub-ROIs (1.1±0.4 vs. 0.7±0.2, 1.2±0.4 vs. 0.8±0.2, and 1.4±0.5 vs. 0.8±0.2, for V90, V80, and Vpeak; p=0.02, 0.02, and 0.002, respectively). The corresponding areas under the curve (AUC) of the ROC were 0.78, 0.80, and 0.87 (p=0.02, 0.03, and 0.004, respectively). No correlation was found between the T2-weighted signal ratio in Vpeak and follow-up time (Pearson's ρ=0.15). Conclusion: Increased T2-weighted signal can be used to identify local failure, while decreased signal indicates local control after spinal SBRT. By choosing the best histogram parameter (here, Vpeak), the AUC of the ROC can be substantially improved, implying reliable prediction of LC and LF. These results are being further studied and validated with large multi-institutional data.
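    One plausible reading of the V90/V80 sub-ROIs (the abstract is terse on the exact definition) is the set of voxels at or above a fraction of the GTV's maximum T2 intensity, with the sub-ROI mean then normalized by the normal-vertebra signal. A sketch under that assumption only:

```python
def sub_roi(voxels, fraction):
    """Voxels whose T2 intensity is at or above `fraction` of the GTV
    maximum (e.g. fraction=0.9 for a V90-style sub-ROI). Assumed reading
    of the abstract's histogram grouping, not a verified definition."""
    cutoff = fraction * max(voxels)
    return [v for v in voxels if v >= cutoff]

def signal_ratio(sub_roi_voxels, normal_vertebra_mean):
    """Mean T2-weighted signal ratio: sub-ROI mean / normal-vertebra mean."""
    return sum(sub_roi_voxels) / len(sub_roi_voxels) / normal_vertebra_mean
```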

  9. Color image definition evaluation method based on deep learning method

    NASA Astrophysics Data System (ADS)

    Liu, Di; Li, YingChun

    2018-01-01

    In order to evaluate different blurring levels of color images and improve image definition evaluation, this paper proposes a no-reference color image clarity evaluation method based on a deep learning framework and a BP neural network classification model. First, VGG16 is used as the feature extractor to extract 4,096-dimensional features from the images; the extracted features and the image labels are then used to train the BP neural network, finally achieving color image definition evaluation. The method was evaluated using images from the CSIQ database, blurred at different levels, giving 4,000 images after processing. The 4,000 images were divided into three categories, each category representing a blur level. For training, 300 out of 400 samples were passed through VGG16 and used to train the BP neural network, and the remaining 100 samples were used for testing. The experimental results show that the method can take full advantage of the learning and representation capability of deep learning. In contrast to the major existing image clarity evaluation methods, which manually design and extract features, the method in this paper extracts image features automatically and achieves excellent image quality classification accuracy on the test set, with an accuracy rate of 96%. Moreover, the predicted quality levels of the original color images are similar to the perception of the human visual system.

  10. SU-E-J-275: Review - Computerized PET/CT Image Analysis in the Evaluation of Tumor Response to Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, W; Wang, J; Zhang, H

    Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed for PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features considering spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed to correlate selected image features with response; these models showed improved performance compared to current methods that use a cutoff value of a single measurement for tumor response. Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy of tumor response evaluation. This work was supported in part by the National Cancer Institute Grant R01CA172638.

  11. Multi Texture Analysis of Colorectal Cancer Continuum Using Multispectral Imagery

    PubMed Central

    Chaddad, Ahmad; Desrosiers, Christian; Bouridane, Ahmed; Toews, Matthew; Hassan, Lama; Tanougast, Camel

    2016-01-01

    Purpose This paper proposes to characterize the continuum of colorectal cancer (CRC) using multiple texture features extracted from multispectral optical microscopy images. Three types of pathological tissues (PT) are considered: benign hyperplasia, intraepithelial neoplasia and carcinoma. Materials and Methods In the proposed approach, the region of interest containing PT is first extracted from multispectral images using active contour segmentation. This region is then encoded using texture features based on the Laplacian-of-Gaussian (LoG) filter, discrete wavelets (DW) and gray level co-occurrence matrices (GLCM). To assess the significance of textural differences between PT types, a statistical analysis based on the Kruskal-Wallis test is performed. The usefulness of texture features is then evaluated quantitatively in terms of their ability to predict PT types using various classifier models. Results Preliminary results show significant texture differences between PT types, for all texture features (p-value < 0.01). Individually, GLCM texture features outperform LoG and DW features in terms of PT type prediction. However, a higher performance can be achieved by combining all texture features, resulting in a mean classification accuracy of 98.92%, sensitivity of 98.12%, and specificity of 99.67%. Conclusions These results demonstrate the efficiency and effectiveness of combining multiple texture features for characterizing the continuum of CRC and discriminating between pathological tissues in multispectral images. PMID:26901134

  12. Irregular echogenic foci representing coagulation necrosis: a useful but perhaps under-recognized EUS echo feature of malignant lymph node invasion.

    PubMed

    Bhutani, Manoop S; Saftoiu, Adrian; Chaya, Charles; Gupta, Parantap; Markowitz, Avi B; Willis, Maurice; Kessel, Ivan; Sharma, Gulshan; Zwischenberger, Joseph B

    2009-06-01

    Coagulation necrosis has been described in malignant lymph nodes. Our aim was to determine if coagulation necrosis in mediastinal lymph nodes imaged by EUS could be used as a useful echo feature for predicting malignant invasion. The study included patients with known or suspected lung cancer who had undergone mediastinal lymph node staging by EUS at a tertiary care university hospital. An expert endosonographer, blinded to the final diagnosis, reviewed the archived digital EUS images of lymph nodes prior to their being sampled by FNA. Lymph nodes positive for malignancy by FNA were included. The benign group included lymph node images with either negative EUS-FNA or lymph nodes imaged by EUS but not subjected to EUS-FNA, with surgical correlation of their benign nature. Twenty-four patients were included. Eight patients were found to have coagulation necrosis; 7/8 of these had a positive result for malignancy by EUS-FNA, and one patient determined to have coagulation necrosis had a non-malignant diagnosis, indicating a false positive result. Sixteen patients had no coagulation necrosis; in 6 of these the final diagnosis was malignant, and in the remaining 10 cases the final diagnosis was benign. For coagulation necrosis as an echo feature of malignant invasion, sensitivity was 54%, specificity was 91%, positive predictive value was 88%, negative predictive value was 63%, and accuracy was 71%. Coagulation necrosis is a useful echo feature for mediastinal lymph node staging by EUS.
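    The reported figures follow from the 2×2 counts implied by the abstract (TP = 7, FP = 1, FN = 6, TN = 10). A minimal sketch that reproduces them:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic test metrics, returned as fractions."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }
```

    With these counts: sensitivity 7/13 ≈ 54%, specificity 10/11 ≈ 91%, PPV 7/8 ≈ 88%, NPV 10/16 ≈ 63%, and accuracy 17/24 ≈ 71%, matching the abstract after rounding.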

  13. Classification of focal liver lesions on ultrasound images by extracting hybrid textural features and using an artificial neural network.

    PubMed

    Hwang, Yoo Na; Lee, Ju Hwan; Kim, Ga Young; Jiang, Yuan Yuan; Kim, Sung Min

    2015-01-01

    This paper focuses on the improvement of the diagnostic accuracy of focal liver lesions by quantifying the key features of cysts, hemangiomas, and malignant lesions on ultrasound images. The focal liver lesions were divided into 29 cysts, 37 hemangiomas, and 33 malignancies. A total of 42 hybrid textural features composed of 5 first order statistics, 18 gray level co-occurrence matrices, 18 Laws' features, and echogenicity were extracted. A total of 29 key features that were selected by principal component analysis were used as a set of inputs for a feed-forward neural network. For each lesion, the performance of the diagnosis was evaluated by using the positive predictive value, negative predictive value, sensitivity, specificity, and accuracy. The results of the experiment indicate that the proposed method exhibits great performance, with a high diagnostic accuracy of over 96% among all focal liver lesion groups (cyst vs. hemangioma, cyst vs. malignant, and hemangioma vs. malignant) on ultrasound images. The accuracy was slightly increased when echogenicity was included in the optimal feature set. These results indicate that it is possible for the proposed method to be applied clinically.

  14. Intratumor heterogeneity of DCE-MRI reveals Ki-67 proliferation status in breast cancer

    NASA Astrophysics Data System (ADS)

    Cheng, Hu; Fan, Ming; Zhang, Peng; Liu, Bin; Shao, Guoliang; Li, Lihua

    2018-03-01

    Breast cancer is a highly heterogeneous disease both biologically and clinically, and certain pathologic parameters, e.g., Ki-67 expression, are useful in predicting the prognosis of patients. The aim of the study is to identify intratumor heterogeneity of breast cancer for predicting Ki-67 proliferation status in estrogen receptor (ER)-positive breast cancer patients. A dataset of 77 patients who underwent dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) examination was collected. Of these patients, 51 had high Ki-67 expression and 26 had low Ki-67 expression. We partitioned the breast tumor into subregions using two methods based on the values of time to peak (TTP) and peak enhancement rate (PER). Within each tumor subregion, image features were extracted, including statistical and morphological features from DCE-MRI. Classification models were applied to each region separately to assess whether classifiers based on features extracted from the various subregions differed in predictive performance. The area under the receiver operating characteristic curve (AUC) was computed using the leave-one-out cross-validation (LOOCV) method. The classifier using features related to moderate time to peak achieved the best performance (AUC of 0.826) compared with those based on the other regions. With a multi-classifier fusion method, the AUC was significantly (P=0.03) increased to 0.858+/-0.032, compared with an AUC of 0.778 for the classifier using features from the entire tumor. The results demonstrate that features reflecting heterogeneity in intratumoral subregions can improve classifier performance for predicting Ki-67 proliferation status compared with using features from the entire tumor alone.

  15. A Hybrid Probabilistic Model for Unified Collaborative and Content-Based Image Tagging.

    PubMed

    Zhou, Ning; Cheung, William K; Qiu, Guoping; Xue, Xiangyang

    2011-07-01

    The increasing availability of large quantities of user contributed images with labels has provided opportunities to develop automatic tools to tag images to facilitate image search and retrieval. In this paper, we present a novel hybrid probabilistic model (HPM) which integrates low-level image features and high-level user provided tags to automatically tag images. For images without any tags, HPM predicts new tags based solely on the low-level image features. For images with user provided tags, HPM jointly exploits both the image features and the tags in a unified probabilistic framework to recommend additional tags to label the images. The HPM framework makes use of the tag-image association matrix (TIAM). However, since the number of images is usually very large and user-provided tags are diverse, TIAM is very sparse, thus making it difficult to reliably estimate tag-to-tag co-occurrence probabilities. We developed a collaborative filtering method based on nonnegative matrix factorization (NMF) for tackling this data sparsity issue. Also, an L1 norm kernel method is used to estimate the correlations between image features and semantic concepts. The effectiveness of the proposed approach has been evaluated using three databases containing 5,000 images with 371 tags, 31,695 images with 5,587 tags, and 269,648 images with 5,018 tags, respectively.
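    The NMF step used above to densify the sparse tag-image association matrix can be sketched with the standard Lee-Seung multiplicative updates (a generic sketch of NMF itself, not the paper's exact collaborative-filtering variant):

```python
import random

def matmul(A, B):
    """Plain-Python matrix product of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=300, eps=1e-9):
    """Lee-Seung multiplicative updates factoring a nonnegative matrix V
    (n x m) into W (n x k) and H (k x m) so that V is approximately W @ H;
    the low-rank product fills in zero entries of a sparse TIAM."""
    random.seed(0)  # deterministic toy initialization
    n, m = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = transpose(W)
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        HT = transpose(H)
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H
```

    The multiplicative form keeps W and H nonnegative throughout, which is why NMF suits co-occurrence counts.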

  16. Flare Prediction Using Photospheric and Coronal Image Data

    NASA Astrophysics Data System (ADS)

    Jonas, Eric; Bobra, Monica; Shankar, Vaishaal; Todd Hoeksema, J.; Recht, Benjamin

    2018-03-01

    The precise physical process that triggers solar flares is not currently understood. Here we attempt to capture the signature of this mechanism in solar-image data of various wavelengths and use these signatures to predict flaring activity. We do this by developing an algorithm that i) automatically generates features in 5.5 TB of image data taken by the Solar Dynamics Observatory of the solar photosphere, chromosphere, transition region, and corona during the time period between May 2010 and May 2014, ii) combines these features with other features based on flaring history and a physical understanding of putative flaring processes, and iii) classifies these features to predict whether a solar active region will flare within a time period of T hours, where T = 2 and 24. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We find that when optimizing for the True Skill Score (TSS), photospheric vector-magnetic-field data combined with flaring history yields the best performance, and when optimizing for the area under the precision-recall curve, all of the data are helpful. Our model performance yields a TSS of 0.84 ±0.03 and 0.81 ±0.03 in the T = 2- and 24-hour cases, respectively, and a value of 0.13 ±0.07 and 0.43 ±0.08 for the area under the precision-recall curve in the T=2- and 24-hour cases, respectively. These relatively high scores are competitive with previous attempts at solar prediction, but our different methodology and extreme care in task design and experimental setup provide an independent confirmation of these results. Given the similar values of algorithm performance across various types of models reported in the literature, we conclude that we can expect a certain baseline predictive capacity using these data. 
We believe that this is the first attempt to predict solar flares using photospheric vector-magnetic field data as well as multiple wavelengths of image data from the chromosphere, transition region, and corona, and it points the way towards greater data integration across diverse sources in future work.
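    The True Skill Statistic (TSS) used to score these forecasts has a simple closed form: hit rate minus false-alarm rate from the binary contingency table. A minimal sketch (the counts below are illustrative, not values from the paper):

```python
def true_skill_statistic(tp, fn, fp, tn):
    """TSS = TP/(TP+FN) - FP/(FP+TN): hit rate minus false-alarm rate.
    Ranges from -1 to 1; 0 means no skill over random forecasting."""
    return tp / (tp + fn) - fp / (fp + tn)

# Illustrative contingency table: 8 hits, 2 misses, 1 false alarm, 9 correct rejections
tss = true_skill_statistic(8, 2, 1, 9)   # 0.8 - 0.1 = 0.7
```

    Unlike accuracy, TSS is insensitive to the class imbalance between flaring and quiet active regions, which is why it is a common figure of merit in flare forecasting.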

  17. Reproducibility and Prognosis of Quantitative Features Extracted from CT Images

    PubMed Central

    Balagurunathan, Yoganand; Gu, Yuhua; Wang, Hua; Kumar, Virendra; Grove, Olya; Hawkins, Sam; Kim, Jongphil; Goldgof, Dmitry B; Hall, Lawrence O; Gatenby, Robert A; Gillies, Robert J

    2014-01-01

    We study the reproducibility of quantitative imaging features that are used to describe tumor shape, size, and texture from computed tomography (CT) scans of non-small cell lung cancer (NSCLC). CT images are dependent on various scanning factors. We focus on characterizing image features that are reproducible in the presence of variations due to patient factors and segmentation methods. Thirty-two NSCLC nonenhanced lung CT scans were obtained from the Reference Image Database to Evaluate Response data set. The tumors were segmented using both manual (radiologist expert) and ensemble (software-automated) methods. A set of features (219 three-dimensional and 110 two-dimensional) was computed, and quantitative image features were statistically filtered to identify a subset of reproducible and nonredundant features. The variability in the repeated experiment was measured by the test-retest concordance correlation coefficient (CCC(TreT)). The natural range in the features, normalized to variance, was measured by the dynamic range (DR). In this study, there were 29 features across segmentation methods found with CCC(TreT) and DR ≥ 0.9 and between-method agreement (R²(Bet)) ≥ 0.95. These reproducible features were tested for predicting radiologist prognostic score; some texture features (run-length and Laws kernels) had an area under the curve of 0.9. The representative features were tested for their prognostic capabilities using an independent NSCLC data set (59 lung adenocarcinomas), where one of the texture features, run-length gray-level nonuniformity, was statistically significant in separating the samples into survival groups (P ≤ .046). PMID:24772210
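    The test-retest reproducibility metric used here, Lin's concordance correlation coefficient, rewards both correlation and agreement in scale and location between the two measurements. A minimal numpy sketch (the example arrays are illustrative):

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Equals 1 only for perfect agreement (y == x), unlike Pearson's r."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```

    Note that a constant offset between test and retest lowers CCC even when Pearson correlation is exactly 1, e.g. concordance_cc([1, 2, 3], [2, 3, 4]) is 4/7.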

  18. Enhancement of multimodality texture-based prediction models via optimization of PET and MR image acquisition protocols: a proof of concept

    NASA Astrophysics Data System (ADS)

    Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam

    2017-11-01

    Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 +/- 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 +/- 0.01.
Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.

  19. SU-F-R-24: Identifying Prognostic Imaging Biomarkers in Early Stage Lung Cancer Using Radiomics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeng, X; Wu, J; Cui, Y

    2016-06-15

    Purpose: Patients diagnosed with early stage lung cancer have favorable outcomes when treated with surgery or stereotactic radiotherapy. However, a significant proportion (∼20%) of patients will develop metastatic disease and eventually die of the disease. The purpose of this work is to identify quantitative imaging biomarkers from CT for predicting overall survival in early stage lung cancer. Methods: In this institutional review board-approved, HIPAA-compliant retrospective study, we analyzed the diagnostic CT scans of 110 patients with early stage lung cancer. Data from 70 patients were used for training/discovery purposes, while those of the remaining 40 patients were used for independent validation. We extracted 191 radiomic features, including statistical, histogram, morphological, and texture features. A Cox proportional hazard regression model, coupled with the least absolute shrinkage and selection operator (LASSO), was used to predict overall survival based on the radiomic features. Results: The optimal prognostic model included three image features from the Laws and wavelet texture families. In the discovery cohort, this model achieved a concordance index of CI=0.67, and it separated the low-risk from high-risk groups in predicting overall survival (hazard ratio=2.72, log-rank p=0.007). In the independent validation cohort, this radiomic signature achieved a CI=0.62, and significantly stratified the low-risk and high-risk groups in terms of overall survival (hazard ratio=2.20, log-rank p=0.042). Conclusion: We identified CT imaging characteristics associated with overall survival in early stage lung cancer. If prospectively validated, this could potentially help identify high-risk patients who might benefit from adjuvant systemic therapy.
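    The concordance index (CI) reported above measures how often higher predicted risks pair with shorter observed survival times. A minimal sketch of Harrell's C with right-censoring, on toy inputs rather than study data:

```python
def concordance_index(times, events, risks):
    """Harrell's C: fraction of comparable patient pairs whose predicted
    risks order their survival times correctly. A pair (i, j) is comparable
    only if the patient with the shorter time had an observed event
    (events[i] == 1), not a censored follow-up. Ties in risk count 0.5."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

    A CI of 0.5 corresponds to random ordering, so the reported 0.67 and 0.62 quantify a modest but real ranking ability.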

  20. Alzheimer's Disease Early Diagnosis Using Manifold-Based Semi-Supervised Learning.

    PubMed

    Khajehnejad, Moein; Saatlou, Forough Habibollahi; Mohammadzade, Hoda

    2017-08-20

    Alzheimer's disease (AD) is currently ranked as the sixth leading cause of death in the United States, and recent estimates indicate that the disorder may rank third, just behind heart disease and cancer, as a cause of death for older people. Clearly, predicting this disease in the early stages and preventing it from progressing is of great importance. The diagnosis of AD requires a variety of medical tests, which leads to huge amounts of multivariate heterogeneous data. It can be difficult and exhausting to manually compare, visualize, and analyze this data due to the heterogeneous nature of medical tests; therefore, an efficient approach for accurate prediction of the condition of the brain through the classification of magnetic resonance imaging (MRI) images is greatly beneficial and yet very challenging. In this paper, a novel approach is proposed for the diagnosis of very early stages of AD through an efficient classification of brain MRI images, which uses label propagation in a manifold-based semi-supervised learning framework. We first apply voxel morphometry analysis to extract some of the most critical AD-related features of brain images from the original MRI volumes and also gray matter (GM) segmentation volumes. The features must capture the most discriminative properties that vary between a healthy and Alzheimer-affected brain. Next, we perform a principal component analysis (PCA)-based dimension reduction on the extracted features for faster yet sufficiently accurate analysis. To make the best use of the captured features, we present a hybrid manifold learning framework which embeds the feature vectors in a subspace. Next, using a small set of labeled training data, we apply a label propagation method in the created manifold space to predict the labels of the remaining images and classify them into the two groups of mild cognitive impairment and normal condition (MCI/NC).
The accuracy of the classification using the proposed method is 93.86% for the Open Access Series of Imaging Studies (OASIS) database of MRI brain images, providing, compared to the best existing methods, a 3% lower error rate.
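    The label-propagation step can be illustrated with Zhou-style label spreading on an affinity graph; this is a generic sketch of the technique, not the authors' implementation (the toy graph, seed labels, and alpha value are all illustrative):

```python
import numpy as np

def propagate_labels(W, y, n_iter=200, alpha=0.9):
    """Label spreading: iterate F <- alpha*S*F + (1-alpha)*Y, where
    S = D^{-1/2} W D^{-1/2} is the symmetrically normalized affinity
    matrix and Y holds the one-hot seed labels (y[i] = -1 marks an
    unlabeled node). Returns the propagated class per node."""
    W = np.asarray(W, float)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    Y = np.zeros((len(y), 2))
    for i, lab in enumerate(y):
        if lab >= 0:
            Y[i, lab] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Two 3-node clusters joined by a weak (0.1) bridge; one seed per cluster.
W = [[0, 1, 1, 0,   0, 0],
     [1, 0, 1, 0,   0, 0],
     [1, 1, 0, 0.1, 0, 0],
     [0, 0, 0.1, 0, 1, 1],
     [0, 0, 0, 1,   0, 1],
     [0, 0, 0, 1,   1, 0]]
labels = propagate_labels(W, [0, -1, -1, -1, -1, 1])   # -> [0 0 0 1 1 1]
```

    Because alpha < 1, the iteration converges, and labels diffuse through strong within-cluster edges while the weak bridge passes little influence across clusters.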

  1. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data come from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in the Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for the three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on the ROC curves to achieve the maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under the ROC curve (AUC) and MEU. SNPs improved the sensitivity of the Gail model (0.276 vs. 0.147) and reduced its specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity recovered to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision approach, comprising utility analysis and McNemar's test, provides a novel framework to evaluate prediction models in the realm of radiogenomics.
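    Selecting a maximum-expected-utility operating point reduces to scoring each candidate (sensitivity, specificity) pair under assumed utilities and disease prevalence. A sketch using the sensitivity/specificity pairs quoted above; the prevalence and utility values are hypothetical, chosen only to make the arithmetic transparent:

```python
def max_expected_utility(points, prevalence, u_tp, u_fn, u_fp, u_tn):
    """Score each ROC operating point (sensitivity, specificity) by its
    expected utility at the given prevalence, and return the best
    (expected_utility, (sens, spec)) pair."""
    best = None
    for sens, spec in points:
        eu = (prevalence * (sens * u_tp + (1 - sens) * u_fn)
              + (1 - prevalence) * ((1 - spec) * u_fp + spec * u_tn))
        if best is None or eu > best[0]:
            best = (eu, (sens, spec))
    return best

# Operating points quoted in the abstract: Gail, Gail+SNP, Gail+SNP+BI-RADS.
pts = [(0.147, 0.912), (0.276, 0.855), (0.457, 0.872)]
# Hypothetical utilities: correct calls worth 1, errors worth 0, prevalence 0.5,
# so expected utility reduces to the average of sensitivity and specificity.
best = max_expected_utility(pts, 0.5, 1, 0, 0, 1)
```

    With realistic utilities (a missed cancer far more costly than a false positive) and a low screening prevalence, the ranking of operating points can change, which is the point of the utility-based framework.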

  2. Jupiter from the Ground

    NASA Image and Video Library

    2011-08-03

    Ground-based astronomers will be playing a vital role in NASA's Juno mission. Images from the amateur astronomy community are needed to help the JunoCam instrument team predict what features will be visible when the camera images are taken.

  3. Predicting axillary lymph node metastasis from kinetic statistics of DCE-MRI breast images

    NASA Astrophysics Data System (ADS)

    Ashraf, Ahmed B.; Lin, Lilie; Gavenonis, Sara C.; Mies, Carolyn; Xanthopoulos, Eric; Kontos, Despina

    2012-03-01

    The presence of axillary lymph node metastases is the most important prognostic factor in breast cancer and can influence the selection of adjuvant therapy, both chemotherapy and radiotherapy. In this work we present a set of kinetic statistics derived from DCE-MRI for predicting axillary node status. Breast DCE-MRI images from 69 women with known nodal status were analyzed retrospectively under HIPAA and IRB approval. Axillary lymph nodes were positive in 12 patients while 57 patients had no axillary lymph node involvement. Kinetic curves for each pixel were computed and a pixel-wise map of time-to-peak (TTP) was obtained. Pixels were first partitioned according to the similarity of their kinetic behavior, based on TTP values. For every kinetic curve, the following pixel-wise features were computed: peak enhancement (PE), wash-in-slope (WIS), wash-out-slope (WOS). Partition-wise statistics for every feature map were calculated, resulting in a total of 21 kinetic statistic features. ANOVA analysis was done to select features that differ significantly between node positive and node negative women. Using the computed kinetic statistic features a leave-one-out SVM classifier was learned that performs with AUC=0.77 under the ROC curve, outperforming the conventional kinetic measures, including maximum peak enhancement (MPE) and signal enhancement ratio (SER), (AUCs of 0.61 and 0.57 respectively). These findings suggest that our DCE-MRI kinetic statistic features can be used to improve the prediction of axillary node status in breast cancer patients. Such features could ultimately be used as imaging biomarkers to guide personalized treatment choices for women diagnosed with breast cancer.
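    The pixel-wise kinetic descriptors named above (peak enhancement, time-to-peak, wash-in and wash-out slopes) can be sketched from a single signal-time curve; the sample times and signal values below are illustrative, and baseline is taken as the first sample:

```python
import numpy as np

def kinetic_features(t, s):
    """Kinetic descriptors of one DCE-MRI pixel curve s(t):
    PE  = peak enhancement over baseline s[0],
    TTP = time from first sample to the peak,
    WIS = average wash-in slope (PE / TTP),
    WOS = average wash-out slope from the peak to the last sample."""
    t, s = np.asarray(t, float), np.asarray(s, float)
    k = int(np.argmax(s))
    pe = s[k] - s[0]
    ttp = t[k] - t[0]
    wis = pe / ttp if ttp > 0 else 0.0
    wos = (s[-1] - s[k]) / (t[-1] - t[k]) if k < len(s) - 1 else 0.0
    return {"PE": pe, "TTP": ttp, "WIS": wis, "WOS": wos}

f = kinetic_features([0, 1, 2, 3, 4], [0, 2, 4, 3, 2])
```

    In the study these per-pixel values are then aggregated into partition-wise statistics (e.g. means per TTP partition), which is what yields the 21 kinetic statistic features.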

  4. Prediction of high-dimensional states subject to respiratory motion: a manifold learning approach

    NASA Astrophysics Data System (ADS)

    Liu, Wenyang; Sawant, Amit; Ruan, Dan

    2016-07-01

    The development of high-dimensional imaging systems in image-guided radiotherapy provides important pathways to the ultimate goal of real-time full volumetric motion monitoring. Effective motion management during radiation treatment usually requires prediction to account for system latency and extra signal/image processing time. It is challenging to predict high-dimensional respiratory motion due to the complexity of the motion pattern combined with the curse of dimensionality. Linear dimension reduction methods such as PCA have been used to construct a linear subspace from the high-dimensional data, followed by efficient predictions on the lower-dimensional subspace. In this study, we extend such rationale to a more general manifold and propose a framework for high-dimensional motion prediction with manifold learning, which allows one to learn more descriptive features compared to linear methods with comparable dimensions. Specifically, a kernel PCA is used to construct a proper low-dimensional feature manifold, where accurate and efficient prediction can be performed. A fixed-point iterative pre-image estimation method is used to recover the predicted value in the original state space. We evaluated and compared the proposed method with a PCA-based approach on level-set surfaces reconstructed from point clouds captured by a 3D photogrammetry system. The prediction accuracy was evaluated in terms of root-mean-squared error. Our proposed method achieved consistently higher (sub-millimeter) prediction accuracy for both 200 ms and 600 ms lookahead lengths compared to the PCA-based approach, and the performance gain was statistically significant.
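    The kernel-PCA step that builds the feature manifold can be sketched in numpy with an RBF kernel. This is a generic illustration of the technique under assumed data, not the authors' pipeline, and the fixed-point pre-image estimation step is omitted:

```python
import numpy as np

def rbf_kernel_pca(X, n_components=2, gamma=1.0):
    """Project training samples onto the leading components of an
    RBF-kernel PCA: build K_ij = exp(-gamma*||x_i - x_j||^2),
    double-center it, eigendecompose, and return the coordinates
    on the top components."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # double-centered kernel matrix
    w, V = np.linalg.eigh(Kc)               # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    return Kc @ (V[:, idx] / np.sqrt(w[idx]))   # = V * sqrt(w), per column

# Illustrative data: two well-separated clusters in 2-D.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], float)
Z = rbf_kernel_pca(X, n_components=2, gamma=0.5)
```

    Prediction then runs in this low-dimensional coordinate system; mapping a predicted point back to the original state space is exactly the pre-image problem the abstract addresses with fixed-point iteration.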

  5. Predicting Response to Neoadjuvant Chemoradiotherapy in Esophageal Cancer with Textural Features Derived from Pretreatment 18F-FDG PET/CT Imaging.

    PubMed

    Beukinga, Roelof J; Hulshoff, Jan B; van Dijk, Lisanne V; Muijs, Christina T; Burgerhof, Johannes G M; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Slump, Cornelis H; Mul, Véronique E M; Plukker, John Th M

    2017-05-01

    Adequate prediction of tumor response to neoadjuvant chemoradiotherapy (nCRT) in esophageal cancer (EC) patients is important in a more personalized treatment. The current best clinical method to predict pathologic complete response is SUVmax in 18F-FDG PET/CT imaging. To improve the prediction of response, we constructed a model to predict complete response to nCRT in EC based on pretreatment clinical parameters and 18F-FDG PET/CT-derived textural features. Methods: From a prospectively maintained single-institution database, we reviewed 97 consecutive patients with locally advanced EC and a pretreatment 18F-FDG PET/CT scan between 2009 and 2015. All patients were treated with nCRT (carboplatin/paclitaxel/41.4 Gy) followed by esophagectomy. We analyzed clinical, geometric, and pretreatment textural features extracted from both 18F-FDG PET and CT. The current most accurate prediction model with SUVmax as a predictor variable was compared with 6 different response prediction models constructed using least absolute shrinkage and selection operator (LASSO) regularized logistic regression. Internal validation was performed to estimate the models' performances. Pathologic response was defined as complete versus incomplete response (Mandard tumor regression grade system 1 vs. 2-5). Results: Pathologic examination revealed 19 (19.6%) complete and 78 (80.4%) incomplete responders. LASSO regularization selected the clinical parameters histologic type and clinical T stage, the 18F-FDG PET-derived textural feature long-run low gray-level emphasis, and the CT-derived textural feature run percentage. Introducing these variables to a logistic regression analysis showed areas under the receiver-operating-characteristic curve (AUCs) of 0.78 compared with 0.58 in the SUVmax model. The discrimination slopes were 0.17 compared with 0.01, respectively. After internal validation, the AUCs decreased to 0.74 and 0.54, respectively. 
Conclusion: The predictive values of the constructed models were superior to the standard method (SUVmax). These results can be considered as an initial step in predicting tumor response to nCRT in locally advanced EC. Further research in refining the predictive value of these models is needed to justify omission of surgery. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  6. Deep Flare Net (DeFN) Model for Solar Flare Prediction

    NASA Astrophysics Data System (ADS)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Ishii, M.

    2018-05-01

    We developed a solar flare prediction model using a deep neural network (DNN) named Deep Flare Net (DeFN). This model can calculate the probability of flares occurring in the following 24 hr in each active region, which is used to determine the most likely maximum classes of flares via a binary classification (e.g., ≥M class versus <M class).

  7. Cancer imaging phenomics toolkit: quantitative imaging analytics for precision diagnostics and predictive modeling of clinical outcome.

    PubMed

    Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina

    2018-01-01

    The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.

  8. Evaluating stability of histomorphometric features across scanner and staining variations: predicting biochemical recurrence from prostate cancer whole slide images

    NASA Astrophysics Data System (ADS)

    Leo, Patrick; Lee, George; Madabhushi, Anant

    2016-03-01

    Quantitative histomorphometry (QH) is the process of computerized extraction of features from digitized tissue slide images. Typically these features are used in machine learning classifiers to predict disease presence, behavior and outcome. Successful robust classifiers require features that both discriminate between classes of interest and are stable across data from multiple sites. Feature stability may be compromised by variation in slide staining and scanning procedures. These laboratory specific variables include dye batch, slice thickness and the whole slide scanner used to digitize the slide. The key therefore is to be able to identify features that are not only discriminating between the classes of interest (e.g. cancer and non-cancer or biochemical recurrence and non-recurrence) but also features that will not wildly fluctuate on slides representing the same tissue class but drawn from multiple different labs and sites. While there have been some recent efforts at understanding feature stability in the context of radiomics applications (i.e. feature analysis of radiographic images), relatively few attempts have been made at studying the trade-off between feature stability and discriminability for histomorphometric and digital pathology applications. In this paper we present two new measures, the preparation-induced instability score (PI) and the latent instability score (LI), to quantify feature instability across and within datasets. Dividing PI by LI yields a ratio for how often a feature for a specific tissue class (e.g. low grade prostate cancer) is different between datasets from different sites versus what would be expected from random chance alone. Using this ratio we seek to quantify feature vulnerability to variations in slide preparation and digitization. Since our goal is to identify stable QH features we evaluate these features for their stability and thus inclusion in machine learning based classifiers in a use case involving prostate cancer. 
Specifically we examine QH features which may predict 5 year biochemical recurrence for prostate cancer patients who have undergone radical prostatectomy from digital slide images of surgically excised tissue specimens, 5 year biochemical recurrence being a strong predictor of disease recurrence. In this study we evaluated the ability of our feature robustness indices to identify the most stable and predictive features of 5 year biochemical recurrence using digitized slide images of surgically excised prostate cancer specimens from 80 different patients across 4 different sites. A total of 242 features from 5 different feature families were investigated to identify the most stable QH features from our set. Our feature robustness indices (PI and LI) suggested that five feature families (graph, shape, co-occurring gland tensors, gland sub-graphs, texture) were susceptible to variations in slide preparation and digitization across various sites. The family least affected was shape features in which 19.3% of features varied across laboratories while the most vulnerable family, at 55.6%, was the gland disorder features. However the disorder features were the most stable within datasets being different between random halves of a dataset in an average of just 4.1% of comparisons while texture features were the most unstable being different at a rate of 4.7%. We also compared feature stability across two datasets before and after color normalization. Color normalization decreased feature stability with 8% and 34% of features different between the two datasets in two outcome groups prior to normalization and 49% and 51% different afterwards. Our results appear to suggest that evaluation of QH features across multiple sites needs to be undertaken to assess robustness and class discriminability alone should not represent the benchmark for selection of QH features to build diagnostic and prognostic digital pathology classifiers.

  9. Covering Jupiter from Earth and Space

    NASA Image and Video Library

    2011-08-03

    Ground-based astronomers will be playing a vital role in NASA's Juno mission. Images from the amateur astronomy community are needed to help the JunoCam instrument team predict what features will be visible when the camera images are taken.

  10. Features Extraction of Flotation Froth Images and BP Neural Network Soft-Sensor Model of Concentrate Grade Optimized by Shuffled Cuckoo Searching Algorithm

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na; Li, Shu-xia

    2014-01-01

    For meeting the forecasting target of key technology indicators in the flotation process, a BP neural network soft-sensor model based on features extraction of flotation froth images and optimized by a shuffled cuckoo search algorithm is proposed. Based on the digital image processing technique, the color features in HSI color space, the visual features based on the gray-level co-occurrence matrix, and the shape characteristics based on the geometric theory of flotation froth images are extracted, respectively, as the input variables of the proposed soft-sensor model. Then the isometric mapping method is used to reduce the input dimension, the network size, and learning time of the BP neural network. Finally, a shuffled cuckoo search algorithm is adopted to optimize the BP neural network soft-sensor model. Simulation results show that the model has better generalization results and prediction accuracy. PMID:25133210
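    The gray-level co-occurrence matrix mentioned above counts how often pairs of quantized gray levels occur at a fixed spatial offset; classic descriptors are then read off the normalized matrix. A minimal sketch for one offset (horizontal neighbor), with an illustrative toy image:

```python
import numpy as np

def glcm_features(img, levels=4):
    """Build the gray-level co-occurrence matrix for the offset (0, 1)
    (each pixel paired with its right-hand neighbor), normalize it to a
    joint probability table P, and return two Haralick-style descriptors:
    contrast = sum (i-j)^2 * P[i,j], energy = sum P[i,j]^2."""
    img = np.asarray(img)
    G = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        G[a, b] += 1
    P = G / G.sum()
    i, j = np.indices(P.shape)
    return {"contrast": ((i - j) ** 2 * P).sum(), "energy": (P ** 2).sum()}

f = glcm_features([[0, 0, 1], [0, 0, 1]], levels=2)
```

    Production implementations typically accumulate several offsets and directions and a larger descriptor set; this single-offset version shows the core bookkeeping.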

  11. Survival Prediction in Pancreatic Ductal Adenocarcinoma by Quantitative Computed Tomography Image Analysis.

    PubMed

    Attiyeh, Marc A; Chakraborty, Jayasree; Doussot, Alexandre; Langdon-Embry, Liana; Mainarich, Shiana; Gönen, Mithat; Balachandran, Vinod P; D'Angelica, Michael I; DeMatteo, Ronald P; Jarnagin, William R; Kingham, T Peter; Allen, Peter J; Simpson, Amber L; Do, Richard K

    2018-04-01

    Pancreatic cancer is a highly lethal cancer with no established a priori markers of survival. Existing nomograms rely mainly on post-resection data and are of limited utility in directing surgical management. This study investigated the use of quantitative computed tomography (CT) features to preoperatively assess survival for pancreatic ductal adenocarcinoma (PDAC) patients. A prospectively maintained database identified consecutive chemotherapy-naive patients with CT angiography and resected PDAC between 2009 and 2012. Variation in CT enhancement patterns was extracted from the tumor region using texture analysis, a quantitative image analysis tool previously described in the literature. Two continuous survival models were constructed, with 70% of the data (training set) using Cox regression, first based only on preoperative serum cancer antigen (CA) 19-9 levels and image features (model A), and then on CA19-9, image features, and the Brennan score (composite pathology score; model B). The remaining 30% of the data (test set) were reserved for independent validation. A total of 161 patients were included in the analysis. Training and test sets contained 113 and 48 patients, respectively. Quantitative image features combined with CA19-9 achieved a c-index of 0.69 [integrated Brier score (IBS) 0.224] on the test data, while combining CA19-9, imaging, and the Brennan score achieved a c-index of 0.74 (IBS 0.200) on the test data. We present two continuous survival prediction models for resected PDAC patients. Quantitative analysis of CT texture features is associated with overall survival. Further work includes applying the model to an external dataset to increase the sample size for training and to determine its applicability.

  12. Conventional MRI features for predicting the clinical outcome of patients with invasive placenta

    PubMed Central

    Chen, Ting; Xu, Xiao-Quan; Shi, Hai-Bin; Yang, Zheng-Qiang; Zhou, Xin; Pan, Yi

    2017-01-01

    PURPOSE We aimed to evaluate whether morphologic magnetic resonance imaging (MRI) features could help to predict the maternal outcome after uterine artery embolization (UAE)-assisted cesarean section (CS) in patients with invasive placenta previa. METHODS We retrospectively reviewed the MRI data of 40 pregnant women who have undergone UAE-assisted cesarean section due to suspected high risk of massive hemorrhage caused by invasive placenta previa. Patients were divided into two groups based on the maternal outcome (good-outcome group: minor hemorrhage and uterus preserved; poor-outcome group: significant hemorrhage or emergency hysterectomy). Morphologic MRI features were compared between the two groups. Multivariate logistic regression analysis was used to identify the most valuable variables, and predictive value of the identified risk factor was determined. RESULTS Low signal intensity bands on T2-weighted imaging (P < 0.001), placenta percreta (P = 0.011), and placental cervical protrusion sign (P = 0.002) were more frequently observed in patients with poor outcome. Low signal intensity bands on T2-weighted imaging was the only significant predictor of poor maternal outcome in multivariate analysis (P = 0.020; odds ratio, 14.79), with 81.3% sensitivity and 84.3% specificity. CONCLUSION Low signal intensity bands on T2-weighted imaging might be a predictor of poor maternal outcome after UAE-assisted cesarean section in patients with invasive placenta previa. PMID:28345524

  13. Generic decoding of seen and imagined objects using hierarchical visual features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-05-22

    Object recognition is a key function in both human and machine vision. While brain decoding of seen and imagined objects has been achieved, the prediction is limited to training examples. We present a decoding approach for arbitrary objects using the machine vision principle that an object category is represented by a set of features rendered invariant through hierarchical processing. We show that visual features, including those derived from a deep convolutional neural network, can be predicted from fMRI patterns, and that greater accuracy is achieved for low-/high-level features with lower-/higher-level visual areas, respectively. Predicted features are used to identify seen/imagined object categories (extending beyond decoder training) from a set of computed features for numerous object images. Furthermore, decoding of imagined objects reveals progressive recruitment of higher-to-lower visual representations. Our results demonstrate a homology between human and machine vision and its utility for brain-based information retrieval.
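    The identification step, matching a decoded feature vector against exemplar feature vectors of many candidate categories, can be sketched as a correlation search. The category names and feature vectors below are illustrative stand-ins, not the study's CNN features:

```python
import numpy as np

def identify_category(predicted, candidates):
    """Return the candidate category whose exemplar feature vector has
    the highest Pearson correlation with the decoded feature vector."""
    best, best_r = None, -2.0
    for name, feat in candidates.items():
        r = np.corrcoef(predicted, feat)[0, 1]
        if r > best_r:
            best, best_r = name, r
    return best

# Hypothetical decoded features and two candidate exemplars.
decoded = [1.0, 2.0, 3.0, 4.0]
winner = identify_category(decoded, {"cat": [1.1, 2.0, 2.9, 4.2],
                                     "car": [4.0, 3.0, 2.0, 1.0]})
```

    Because the match is computed in feature space rather than over decoder outputs, the candidate set can extend far beyond the categories used to train the decoder, which is what makes the decoding "generic."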

  14. Quantitative Analysis of 18F-Fluorodeoxyglucose Positron Emission Tomography Identifies Novel Prognostic Imaging Biomarkers in Locally Advanced Pancreatic Cancer Patients Treated With Stereotactic Body Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Yi; Global Institution for Collaborative Research and Education, Hokkaido University, Sapporo; Song, Jie

Purpose: To identify prognostic biomarkers in pancreatic cancer using high-throughput quantitative image analysis. Methods and Materials: In this institutional review board–approved study, we retrospectively analyzed images and outcomes for 139 locally advanced pancreatic cancer patients treated with stereotactic body radiation therapy (SBRT). The overall population was split into a training cohort (n=90) and a validation cohort (n=49) according to the time of treatment. We extracted quantitative imaging characteristics from pre-SBRT 18F-fluorodeoxyglucose positron emission tomography, including statistical, morphologic, and texture features. A Cox proportional hazard regression model was built to predict overall survival (OS) in the training cohort using 162 robust image features. To avoid over-fitting, we applied the elastic net to obtain a sparse set of image features, whose linear combination constitutes a prognostic imaging signature. Univariate and multivariate Cox regression analyses were used to evaluate the association with OS, and the concordance index (CI) was used to evaluate survival prediction accuracy. Results: The prognostic imaging signature included 7 features characterizing different tumor phenotypes, including shape, intensity, and texture. On the validation cohort, univariate analysis showed that this prognostic signature was significantly associated with OS (P=.002, hazard ratio 2.74), which improved upon conventional imaging predictors including tumor volume, maximum standardized uptake value, and total lesion glycolysis (P=.018-.028, hazard ratio 1.51-1.57). On multivariate analysis, the proposed signature was the only significant prognostic index (P=.037, hazard ratio 3.72) when adjusted for conventional imaging and clinical factors (P=.123-.870, hazard ratio 0.53-1.30). 
In terms of CI, the proposed signature scored 0.66 and was significantly better than competing prognostic indices (CI 0.48-0.64, Wilcoxon rank sum test P<1e-6). Conclusion: Quantitative analysis identified novel 18F-fluorodeoxyglucose positron emission tomography image features that showed improved prognostic value over conventional imaging metrics. If validated in large, prospective cohorts, the new prognostic signature might be used to identify patients for individualized risk-adaptive therapy.
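The concordance index used to score the signatures above can be computed directly from risk scores and outcomes. A minimal, self-contained sketch of Harrell's C-index (not the authors' code; the toy data are invented for illustration):

```python
import numpy as np

def concordance_index(times, risk_scores, events):
    """Harrell's C-index: fraction of comparable pairs in which the
    higher-risk subject has the shorter survival time. A pair (i, j)
    is comparable when subject i has an observed event before time j."""
    n_concordant, n_comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                n_comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    n_concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    n_concordant += 0.5  # ties get half credit
    return n_concordant / n_comparable

# toy cohort: risk perfectly inverse to survival time -> C = 1.0
times = np.array([2.0, 5.0, 8.0, 11.0])   # months
events = np.array([True, True, True, False])  # last subject censored
risks = np.array([4.0, 3.0, 2.0, 1.0])
print(concordance_index(times, risks, events))  # 1.0
```

A CI of 0.5 corresponds to random risk ordering, which is why the 0.66 reported above is meaningfully better than the 0.48-0.64 of the competing indices.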

  15. Searching for a traveling feature in Saturn's rings in Cassini Imaging Science Subsystem data

    NASA Astrophysics Data System (ADS)

    Aye, Klaus-Michael; Rehnberg, Morgan; Brown, Zarah; Esposito, Larry W.

    2016-10-01

Introduction: Using Cassini UVIS occultation data, a traveling wave feature has been identified in Saturn's rings that is most likely caused by the swap of the radial positions of the moons Janus and Epimetheus [1]. The hypothesis is that non-linear interference between the linear density waves, when relocated by the moon swap, creates a solitary wave that travels outward through the rings. The observations in [1] further led to the derivation of radial travel speeds for the identified traveling features, from 39.6 km/yr for the Janus 5:4 resonance up to 45.8 km/yr for the Janus 4:3 resonance. Previous confirmations in ISS data: Work in [1] also identified the feature in Cassini Imaging Science Subsystem (ISS) data taken around the time of the UVIS occultations where the phenomenon was first discovered; so far there is one ISS image for each of the Janus resonances 2:1, 4:3, 5:4, and 6:5. Search guided by predicted locations: Using the observation-fitted radial velocities from [1], we can extrapolate the Saturn radii at which the traveling feature should be found at later times. Using this, together with new image analysis and plotting tools available in [2], we have identified a candidate feature in an ISS image taken 2.5 years after the feature-causing moon swap of January 2006. We intend to expand our search by identifying candidate ISS data through a meta-database search that constrains the radius at future times to the predicted locations of the hypothesized solitary wave, and will present our findings at this conference. References: [1] Rehnberg, M.E., Esposito, L.W., Brown, Z.L., Albers, N., Sremčević, M., Stewart, G.R., 2016. A Traveling Feature in Saturn's Rings. Icarus, accepted June 2016. [2] K.-Michael Aye (2016). pyciss: v0.5.0. Zenodo. 10.5281/zenodo.53092
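The location prediction described above is a linear extrapolation of the fitted radial velocity. A small sketch (the starting radius r0 below is an assumed, illustrative value near the Janus 5:4 resonance, not a number taken from [1]):

```python
def predicted_radius(r0_km, v_km_per_yr, dt_years):
    """Linear extrapolation of the solitary wave's radial position:
    r(t) = r0 + v * (t - t0), all distances in km."""
    return r0_km + v_km_per_yr * dt_years

# e.g. a feature launched near the Janus 5:4 resonance (r0 assumed here),
# traveling outward at 39.6 km/yr for 2.5 years after the Jan 2006 swap
print(predicted_radius(130707.0, 39.6, 2.5))  # about 130806 km
```

Repeating this for each resonance and image epoch gives the radius window used to constrain the meta-database search.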

  16. Low bandwidth eye tracker for scanning laser ophthalmoscopy

    NASA Astrophysics Data System (ADS)

    Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia

    2012-02-01

The incorporation of adaptive optics into scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, thereby significantly reducing the imaging time, storage requirements, post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. In addition, linear, quadratic and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
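The linear and quadratic motion predictors mentioned above can be written as polynomial extrapolations of the last few position estimates. A sketch under the simplifying assumptions of uniform frame sampling and 1-D motion (the Kalman filter variant is omitted):

```python
import numpy as np

def linear_predict(samples):
    # extrapolate the next position from the last two samples
    return 2 * samples[-1] - samples[-2]

def quadratic_predict(samples):
    # extrapolate the next position from the last three samples
    return 3 * samples[-1] - 3 * samples[-2] + samples[-3]

# simulated sinusoidal eye drift sampled at the frame rate (period = 40 frames)
t = np.arange(0, 20)
x = np.sin(2 * np.pi * t / 40.0)
errors_lin = [abs(linear_predict(x[:i]) - x[i]) for i in range(3, 20)]
errors_quad = [abs(quadratic_predict(x[:i]) - x[i]) for i in range(3, 20)]
print(np.mean(errors_lin), np.mean(errors_quad))
```

For smooth, slowly varying motion like this, the quadratic predictor's mean error is lower than the linear one's, which is the kind of comparison the residual-displacement metric above quantifies.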

  17. Expectation and Surprise Determine Neural Population Responses in the Ventral Visual Stream

    PubMed Central

    Egner, Tobias; Monti, Jim M.; Summerfield, Christopher

    2014-01-01

Visual cortex is traditionally viewed as a hierarchy of neural feature detectors, with neural population responses being driven by bottom-up stimulus features. Conversely, “predictive coding” models propose that each stage of the visual hierarchy harbors two computationally distinct classes of processing unit: representational units that encode the conditional probability of a stimulus and provide predictions to the next lower level; and error units that encode the mismatch between predictions and bottom-up evidence, and forward prediction error to the next higher level. Predictive coding therefore suggests that neural population responses in category-selective visual regions, like the fusiform face area (FFA), reflect a summation of activity related to prediction (“face expectation”) and prediction error (“face surprise”), rather than a homogeneous feature detection response. We tested the rival hypotheses of the feature detection and predictive coding models by collecting functional magnetic resonance imaging data from the FFA while independently varying both stimulus features (faces vs houses) and subjects’ perceptual expectations regarding those features (low vs medium vs high face expectation). The effects of stimulus and expectation factors interacted, whereby FFA activity elicited by face and house stimuli was indistinguishable under high face expectation and maximally differentiated under low face expectation. Using computational modeling, we show that these data can be explained by predictive coding but not by feature detection models, even when the latter are augmented with attentional mechanisms. Thus, population responses in the ventral visual stream appear to be determined by feature expectation and surprise rather than by stimulus features per se. PMID:21147999
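The predictive-coding account can be caricatured in a few lines: let representational units carry the face expectation and error units carry a rectified face prediction error, and sum the two. This deliberately simplified toy (not the authors' computational model) reproduces the reported interaction, with face and house responses converging as face expectation grows:

```python
def ffa_response(stimulus_is_face, face_expectation):
    """Toy FFA population response under predictive coding:
    representational units encode the face expectation (prior),
    error units encode the rectified face prediction error (surprise)."""
    prediction = face_expectation
    stimulus = 1.0 if stimulus_is_face else 0.0
    surprise = max(stimulus - prediction, 0.0)
    return prediction + surprise

for p in (0.25, 0.5, 0.75):  # low / medium / high face expectation
    face, house = ffa_response(True, p), ffa_response(False, p)
    print(p, face, house, face - house)
```

The face-minus-house difference shrinks as expectation rises, i.e. responses are maximally differentiated under low face expectation, qualitatively matching the interaction described above.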

  18. Brain properties predict proximity to symptom onset in sporadic Alzheimer's disease.

    PubMed

    Vogel, Jacob W; Vachon-Presseau, Etienne; Pichet Binette, Alexa; Tam, Angela; Orban, Pierre; La Joie, Renaud; Savard, Mélissa; Picard, Cynthia; Poirier, Judes; Bellec, Pierre; Breitner, John C S; Villeneuve, Sylvia

    2018-06-01

See Tijms and Visser (doi:10.1093/brain/awy113) for a scientific commentary on this article. Alzheimer's disease is preceded by a lengthy 'preclinical' stage spanning many years, during which subtle brain changes occur in the absence of overt cognitive symptoms. Predicting when the onset of disease symptoms will occur is an unsolved challenge in individuals with sporadic Alzheimer's disease. In individuals with autosomal dominant genetic Alzheimer's disease, the age of symptom onset is similar across generations, allowing the prediction of individual onset times with some accuracy. We extend this concept to persons with a parental history of sporadic Alzheimer's disease to test whether an individual's symptom onset age can be informed by the onset age of their affected parent, and whether this estimated onset age can be predicted using only MRI. Structural and functional MRIs were acquired from 255 ageing cognitively healthy subjects with a parental history of sporadic Alzheimer's disease from the PREVENT-AD cohort. Years to estimated symptom onset was calculated as participant age minus age of parental symptom onset. Grey matter volume was extracted from T1-weighted images and whole-brain resting state functional connectivity was evaluated using degree count. Both modalities were summarized using a 444-region cortical-subcortical atlas. The entire sample was divided into training (n = 138) and testing (n = 68) sets. Within the training set, individuals closer to or beyond their parent's symptom onset demonstrated reduced grey matter volume and altered functional connectivity, specifically in regions known to be vulnerable in Alzheimer's disease. Machine learning was used to identify a weighted set of imaging features trained to predict years to estimated symptom onset. This feature set alone significantly predicted years to estimated symptom onset in the unseen testing data. 
This model, using only neuroimaging features, significantly outperformed a similar model instead trained with cognitive, genetic, imaging and demographic features used in a traditional clinical setting. We next tested if these brain properties could be generalized to predict time to clinical progression in a subgroup of 26 individuals from the Alzheimer's Disease Neuroimaging Initiative, who eventually converted either to mild cognitive impairment or to Alzheimer's dementia. The feature set trained on years to estimated symptom onset in the PREVENT-AD predicted variance in time to clinical conversion in this separate longitudinal dataset. Adjusting for participant age did not impact any of the results. These findings demonstrate that years to estimated symptom onset or similar measures can be predicted from brain features and may help estimate presymptomatic disease progression in at-risk individuals.
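The target variable and the regression step can be sketched as follows; the ridge regressor and the synthetic 10-region feature matrix below are illustrative stand-ins for the paper's 444-region atlas and machine learning pipeline:

```python
import numpy as np

def years_to_estimated_onset(participant_age, parent_onset_age):
    # negative values: the participant is still younger than the age
    # at which their affected parent first showed symptoms
    return participant_age - parent_onset_age

# ridge regression of a synthetic outcome on hypothetical regional features
rng = np.random.default_rng(0)
X = rng.normal(size=(138, 10))        # training set: 138 subjects, 10 regions
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=0.1, size=138)  # synthetic "years to onset"
lam = 1.0                              # ridge penalty
w = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
pred = X @ w
print(np.corrcoef(pred, y)[0, 1])      # in-sample correlation, close to 1
```

In the study itself the weighted feature set was of course evaluated on held-out subjects (n = 68) and on the separate ADNI converters, not in-sample as in this toy.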

  19. Response Classification Images in Vernier Acuity

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, B. L.; Ellis, Stephen R. (Technical Monitor)

    1997-01-01

Orientation selective and local sign mechanisms have been proposed as the basis for vernier acuity judgments. Linear image features contributing to discrimination can be determined for a two choice task by adding external noise to the images and then averaging the noises separately for the four types of stimulus/response trials. This method is applied to a vernier acuity task with different spatial separations to compare the predictions of the two theories. Three well-practiced observers were each presented with around 5000 trials of a vernier stimulus consisting of two dark horizontal lines (5 min by 0.3 min) within additive low-contrast white noise. Two spatial separations were tested, abutting and a 10 min horizontal separation. The task was to determine whether the target lines were aligned or vertically offset. The noises were averaged separately for the four stimulus/response trial types (e.g., stimulus = offset, response = aligned). The sum of the two 'not aligned' images was then subtracted from the sum of the 'aligned' images to obtain an overall image. Spatially smoothed images were quantized according to expected variability in the smoothed images to allow estimation of the statistical significance of image features. The response images from the 10 min separation condition are consistent with the local sign theory, having the appearance of two linear operators measuring vertical position with opposite sign. The images from the abutting stimulus have the same appearance with the two operators closer together. The image predicted by an oriented filter model is similar, but has its greatest weight in the abutting region, while the response images fall to nonsignificance there. The response correlation image method, previously demonstrated for letter discrimination, clarifies the features used in vernier acuity.
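The noise-averaging logic behind response classification images can be sketched with a simulated linear observer (the template, trial count, and noise statistics below are invented for illustration, and the four-category average is collapsed to a two-category difference):

```python
import numpy as np

rng = np.random.default_rng(1)
template = np.zeros((8, 8))
template[3, :] = 1.0          # hypothetical linear observer weights (one row)

# simulate trials: white-noise fields and the observer's binary responses
n_trials = 4000
noises = rng.normal(size=(n_trials, 8, 8))
responses = (noises * template).sum(axis=(1, 2)) > 0   # True = "offset"

# classification image: mean noise on "offset" responses minus "aligned"
cls_img = noises[responses].mean(axis=0) - noises[~responses].mean(axis=0)
print(cls_img[3].mean(), cls_img[0].mean())  # template row stands out
```

The recovered image is large along the row the simulated observer actually used and near zero elsewhere, which is exactly how the significant regions of the observers' response images are interpreted above.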

  20. Learning Computational Models of Video Memorability from fMRI Brain Imaging.

    PubMed

    Han, Junwei; Chen, Changyuan; Shao, Ling; Hu, Xintao; Han, Jungong; Liu, Tianming

    2015-08-01

Various visual media are not equally memorable to the human brain. This paper looks into a new direction of modeling the memorability of video clips and automatically predicting how memorable they are by learning from brain functional magnetic resonance imaging (fMRI). We propose a novel computational framework by integrating the power of low-level audiovisual features and brain activity decoding via fMRI. Initially, a user study experiment is performed to create a ground truth database for measuring video memorability, and a set of effective low-level audiovisual features is examined in this database. Then, human subjects' brain fMRI data are obtained while they are watching the video clips. The fMRI-derived features that convey the brain activity of memorizing videos are extracted using a universal brain reference system. Finally, because fMRI scanning is expensive and time-consuming, a computational model is learned on our benchmark dataset with the objective of maximizing the correlation between the low-level audiovisual features and the fMRI-derived features using joint subspace learning. The learned model can then automatically predict the memorability of videos without fMRI scans. Evaluations on publicly available image and video databases demonstrate the effectiveness of the proposed framework.

  1. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm.

    PubMed

    Heidari, Morteza; Khuzani, Abolfazl Zargari; Hollingsworth, Alan B; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-01-30

In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate the advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes: 250 high risk cases in which cancer was detected in the subsequent mammography screening, and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between the left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with the LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increasing trend of adjusted odds ratios was also detected, with odds ratios rising from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.
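The leave-one-case-out protocol described above can be sketched as follows, with a nearest-mean classifier standing in for the paper's LPP-plus-multi-feature-fusion classifier (the two-class toy data are invented):

```python
import numpy as np

def loco_accuracy(X, y):
    """Leave-one-case-out cross-validation: hold out one case, train on
    the rest, predict the held-out case, and tally the accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        Xtr, ytr = X[mask], y[mask]
        means = [Xtr[ytr == c].mean(axis=0) for c in (0, 1)]
        # assign the held-out case to the nearer class mean
        pred = int(np.linalg.norm(X[i] - means[1]) < np.linalg.norm(X[i] - means[0]))
        correct += pred == y[i]
    return correct / len(y)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
print(loco_accuracy(X, y))  # well-separated classes -> high accuracy
```

Because the feature-regeneration step runs inside each LOCO iteration in the paper, the held-out case never influences its own projection, which keeps the accuracy estimate unbiased.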

  2. Prediction of breast cancer risk using a machine learning approach embedded with a locality preserving projection algorithm

    NASA Astrophysics Data System (ADS)

    Heidari, Morteza; Zargari Khuzani, Abolfazl; Hollingsworth, Alan B.; Danala, Gopichandh; Mirniaharikandehei, Seyedehnafiseh; Qiu, Yuchen; Liu, Hong; Zheng, Bin

    2018-02-01

In order to automatically identify a set of effective mammographic image features and build an optimal breast cancer risk stratification model, this study aims to investigate the advantages of applying a machine learning approach embedded with a locality preserving projection (LPP) based feature combination and regeneration algorithm to predict short-term breast cancer risk. A dataset involving negative mammograms acquired from 500 women was assembled. This dataset was divided into two age-matched classes: 250 high risk cases in which cancer was detected in the subsequent mammography screening, and 250 low risk cases, which remained negative. First, a computer-aided image processing scheme was applied to segment fibro-glandular tissue depicted on mammograms and initially compute 44 features related to the bilateral asymmetry of mammographic tissue density distribution between the left and right breasts. Next, a multi-feature fusion based machine learning classifier was built to predict the risk of cancer detection in the next mammography screening. A leave-one-case-out (LOCO) cross-validation method was applied to train and test the machine learning classifier embedded with the LPP algorithm, which generated a new operational vector with 4 features using a maximal variance approach in each LOCO process. Results showed a 9.7% increase in risk prediction accuracy when using this LPP-embedded machine learning approach. An increasing trend of adjusted odds ratios was also detected, with odds ratios rising from 1.0 to 11.2. This study demonstrated that applying the LPP algorithm effectively reduced feature dimensionality, and yielded higher and potentially more robust performance in predicting short-term breast cancer risk.

  3. Short-term solar flare prediction using image-case-based reasoning

    NASA Astrophysics Data System (ADS)

    Liu, Jin-Fu; Li, Fei; Zhang, Huai-Peng; Yu, Da-Ren

    2017-10-01

Solar flares strongly influence space weather and human activities, and their prediction is highly complex. Existing solutions, such as data-based and model-based approaches, share a common shortcoming: the lack of human engagement in the forecasting process. An image-case-based reasoning method is introduced to address this shortcoming. The image case library is composed of SOHO/MDI longitudinal magnetograms, from which the maximum horizontal gradient, the length of the neutral line and the number of singular points are extracted for retrieving similar image cases. Genetic optimization algorithms are employed to optimize the weight assignment for image features and the number of similar image cases retrieved. Similar image cases, together with prediction results derived by majority voting over them, are output and shown to the forecaster so that his/her experience can be integrated into the final prediction. Experimental results demonstrate that the case-based reasoning approach performs slightly better than other methods and is more efficient, with forecasts further improved by human input.
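The retrieval-and-vote core of image-case-based reasoning can be sketched as a weighted nearest-neighbor search over the three magnetogram features; the weights and case data below are invented placeholders for the GA-optimized values:

```python
import numpy as np

def retrieve_and_vote(query, case_features, case_labels, weights, k=5):
    """Retrieve the k most similar image cases by weighted Euclidean
    distance over (max horizontal gradient, neutral-line length,
    singular-point count), then predict flaring by majority vote."""
    d = np.sqrt(((case_features - query) ** 2 * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]
    votes = case_labels[nearest]
    return nearest, int(votes.sum() * 2 > k)

rng = np.random.default_rng(3)
flare = rng.normal(3.0, 0.5, (30, 3))   # hypothetical flare-productive cases
quiet = rng.normal(1.0, 0.5, (30, 3))   # hypothetical quiet cases
cases = np.vstack([flare, quiet])
labels = np.array([1] * 30 + [0] * 30)
w = np.array([0.5, 0.3, 0.2])           # feature weights (assumed GA output)
nearest, pred = retrieve_and_vote(np.array([2.9, 3.1, 3.0]), cases, labels, w)
print(pred)  # 1
```

The retrieved cases themselves (indices in `nearest`) are what a forecaster would inspect before accepting or overriding the vote, which is the human-engagement step the method emphasizes.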

  4. Automatic machine learning based prediction of cardiovascular events in lung cancer screening data

    NASA Astrophysics Data System (ADS)

    de Vos, Bob D.; de Jong, Pim A.; Wolterink, Jelmer M.; Vliegenthart, Rozemarijn; Wielingen, Geoffrey V. F.; Viergever, Max A.; Išgum, Ivana

    2015-03-01

Calcium burden determined in CT images acquired in lung cancer screening is a strong predictor of cardiovascular events (CVEs). This study investigated whether subjects undergoing such screening who are at risk of a CVE can be identified using automatic image analysis and subject characteristics. Moreover, the study examined whether these individuals can be identified using solely image information, or whether a combination of image and subject data is needed. A set of 3559 male subjects participating in the Dutch-Belgian lung cancer screening trial was included. Low-dose non-ECG-synchronized chest CT images acquired at baseline were analyzed (1834 scanned in the University Medical Center Groningen, 1725 in the University Medical Center Utrecht). Aortic and coronary calcifications were identified using previously developed automatic algorithms. A set of features describing the number, volume and size distribution of the detected calcifications was computed. Age of the participants was extracted from image headers. Features describing participants' smoking status, smoking history and past CVEs were obtained. CVEs that occurred within three years after the imaging were used as outcome. Support vector machine classification was performed employing different feature sets, using either only image features or a combination of image and subject-related characteristics. Classification based solely on the image features resulted in an area under the ROC curve (Az) of 0.69. A combination of image and subject features resulted in an Az of 0.71. The results demonstrate that subjects undergoing lung cancer screening who are at risk of CVE can be identified using automatic image analysis. Adding subject information slightly improved the performance.

  5. Evaluation of quantitative image analysis criteria for the high-resolution microendoscopic detection of neoplasia in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2010-03-01

Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parekh, V; Jacobs, MA

Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient's pathology would be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were included in this study. We tested the ability of the MIRaGe algorithm to extract features for classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. 
The results demonstrated the power of the MIRaGe algorithm at automatically discovering useful feature representations directly from the raw multiparametric MRI data. In conclusion, the MIRaGe informatics model provides a powerful tool with applicability in cancer diagnosis and a possibility of extension to other kinds of pathologies. NIH (P50CA103175, 5P30CA006973 (IRAT), R01CA190299, U01CA140204), Siemens Medical Systems (JHU-2012-MR-86-01) and Nvidia Graphics Corporation.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galavis, P; Friedman, K; Chandarana, H

Purpose: Radiomics involves the extraction of texture features from different imaging modalities with the purpose of developing models to predict patient treatment outcomes. The purpose of this study is to investigate texture feature reproducibility across [18F]FDG PET/CT and [18F]FDG PET/MR imaging in patients with primary malignancies. Methods: Twenty-five prospective patients with solid tumors underwent a clinical [18F]FDG PET/CT scan followed by [18F]FDG PET/MR scans. In all patients the lesions were identified using nuclear medicine reports. The images were co-registered and segmented using an in-house auto-segmentation method. Fifty features, based on the intensity histogram and on second- and high-order matrices, were extracted from the segmented regions of both image data sets. A one-way random-effects ANOVA model of the intra-class correlation coefficient (ICC) was used to establish texture feature correlations between both data sets. Results: The fifty features were classified into three categories (high, intermediate, and low) based on their ICC values, which ranged from 0.1 to 0.86. Ten features extracted from second- and high-order matrices showed large ICC ≥ 0.70. Seventeen features presented intermediate 0.5 ≤ ICC ≤ 0.65 and the remaining twenty-three presented low ICC ≤ 0.45. Conclusion: Features with large ICC values could be reliable candidates for quantification, as they lead to similar results from both imaging modalities. Features with small ICC indicate a lack of correlation; therefore, using these features as a quantitative measure will lead to different assessments of the same lesion depending on the imaging modality from which they are extracted. This study shows the need for further investigation and standardization of features across multiple imaging modalities.
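The one-way random-effects ICC used to grade feature reproducibility has a closed form, ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), with MSB and MSW the between- and within-target mean squares. A minimal sketch with k = 2 raters (the two modalities); the data are invented:

```python
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1).
    scores: (n_targets, k_raters) array, e.g. one feature value per
    lesion from PET/CT (column 0) and PET/MR (column 1)."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)              # between
    msw = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within
    return (msb - msw) / (msb + (k - 1) * msw)

# identical feature values from both modalities -> perfect agreement
same = np.column_stack([np.arange(10.0), np.arange(10.0)])
print(round(icc_oneway(same), 3))  # 1.0
```

Disagreement between the two columns inflates MSW and pulls the ICC down toward (and below) zero, which is what separates the high, intermediate, and low feature categories above.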

  8. Glioma survival prediction with the combined analysis of in vivo 11C-MET-PET, ex vivo and patient features by supervised machine learning.

    PubMed

    Papp, Laszlo; Poetsch, Nina; Grahovac, Marko; Schmidbauer, Victor; Woehrer, Adelheid; Preusser, Matthias; Mitterhauser, Markus; Kiesel, Barbara; Wadsak, Wolfgang; Beyer, Thomas; Hacker, Marcus; Traub-Weidinger, Tatjana

    2017-11-24

Gliomas are the most common types of tumors in the brain. While the definite diagnosis is routinely made ex vivo by histopathologic and molecular examination, the diagnostic work-up of patients with suspected glioma is mainly done using magnetic resonance imaging (MRI). Nevertheless, L-S-methyl-11C-methionine (11C-MET) positron emission tomography (PET) holds great potential for the characterization of gliomas. The aim of this study was to establish machine learning (ML) driven survival models for glioma built on 11C-MET-PET, ex vivo and patient characteristics. Methods: 70 patients with a treatment-naïve glioma, who had a positive 11C-MET-PET and histopathology-derived ex vivo features, such as World Health Organization (WHO) 2007 tumor grade, histology and isocitrate dehydrogenase (IDH1-R132H) mutation status, were included. The 11C-MET-positive primary tumors were delineated semi-automatically on PET images, followed by the extraction of tumor-to-background-ratio-based general and higher-order textural features using five different binning approaches. In vivo and ex vivo features, as well as patient characteristics (age, weight, height, body-mass-index, Karnofsky-score), were merged to characterize the tumors. Machine learning approaches were utilized to identify relevant in vivo, ex vivo and patient features and their relative weights for 36-month survival prediction. The resulting feature weights were used to establish three predictive models per binning configuration based on a combination of: in vivo/ex vivo and clinical patient information (M36IEP), in vivo and patient-only information (M36IP), and in vivo only (M36I). In addition, a binning-independent ex vivo and patient-only (M36EP) model was created. The established models were validated in a Monte Carlo (MC) cross-validation scheme. Results: The most prominent ML-selected and -weighted features were patient- and ex vivo-based, followed by in vivo features. 
The highest area under the curve (AUC) values of our models as revealed by the MC cross-validation were: 0.9 (M36IEP), 0.87 (M36EP), 0.77 (M36IP) and 0.72 (M36I). Conclusion: Survival prediction of glioma patients based on amino acid PET using computer-supported predictive models based on in vivo, ex vivo and patient features is highly accurate. Copyright © 2017 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  9. Improving protein fold recognition by extracting fold-specific features from predicted residue-residue contacts.

    PubMed

    Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo

    2017-12-01

Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. Existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNN has shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNN to fold recognition is that contact likelihood maps are essentially analogous to images, as they both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that, even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When further combining these features with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at the family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level and 1% higher at the family level. 
An independent assessment on the SCOP_TEST dataset showed consistent performance improvement, indicating the robustness of our approach. Furthermore, bi-clustering results of the extracted features are compatible with the fold hierarchy of proteins, implying that these features are fold-specific. Together, these results suggest that the features extracted from predicted contacts are orthogonal to alignment-related features, and that combining them could greatly facilitate fold recognition at the superfamily/fold levels and template-based prediction of protein structures. Source code of DeepFR is freely available through https://github.com/zhujianwei31415/deepfr, and a web server is available through http://protein.ict.ac.cn/deepfr. Contact: zheng@itp.ac.cn or dbu@ict.ac.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
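    The feature-extraction idea above can be pictured with a toy example: a predicted contact-likelihood map is treated as a small greyscale image and passed through one convolution-plus-pooling step, the basic operation a DCNN layer performs. This is an illustrative sketch only; the map and kernel below are hypothetical and do not reproduce the published DeepFR architecture.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution with a ReLU, on nested lists."""
    n, k = len(image), len(kernel)
    out = []
    for i in range(n - k + 1):
        row = []
        for j in range(n - k + 1):
            s = sum(image[i + u][j + v] * kernel[u][v]
                    for u in range(k) for v in range(k))
            row.append(max(0.0, s))  # ReLU activation
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling over size x size blocks."""
    n = len(fmap)
    return [[max(fmap[i + u][j + v] for u in range(size) for v in range(size))
             for j in range(0, n - size + 1, size)]
            for i in range(0, n - size + 1, size)]

# Toy 6x6 contact-likelihood map: a strong band near the diagonal.
contacts = [[1.0 if abs(i - j) <= 1 else 0.1 for j in range(6)] for i in range(6)]
grad_kernel = [[1, -1], [0, 0]]   # hypothetical horizontal-gradient kernel
features = max_pool(conv2d(contacts, grad_kernel))
```

    The pooled map responds most strongly where the contact band changes, which is the kind of spatial pattern a learned filter bank would pick up at scale.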

  10. Chemistry by Computer.

    ERIC Educational Resources Information Center

    Garmon, Linda

    1981-01-01

    Describes the features of various computer chemistry programs. The utilization of computer graphics, color, digital imaging, and other innovations is discussed in programs including those which aid in the identification of unknowns, predict whether chemical reactions are feasible, and predict the biological activity of xenobiotic compounds. (CS)

  11. SU-E-QI-17: Dependence of 3D/4D PET Quantitative Image Features On Noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliver, J; Budzevich, M; Zhang, G

    2014-06-15

    Purpose: Quantitative imaging is a fast-evolving discipline in which a large number of features are extracted from images; i.e., radiomics. Some features have been shown to have diagnostic, prognostic and predictive value. However, they are sensitive to acquisition and processing factors, e.g., noise. In this study, noise was added to positron emission tomography (PET) images to determine how features were affected by it. Methods: Three levels of Gaussian noise were added to the PET images of 8 lung cancer patients acquired in 3D mode (static) and using respiratory tracking (4D); for the latter, images from one of 10 phases were used. A total of 62 features: 14 shape, 19 intensity (1stO), 18 GLCM textures (2ndO; from grey level co-occurrence matrices) and 11 RLM textures (2ndO; from run-length matrices) were extracted from segmented tumors. Dimensions of the GLCM were 256×256, calculated using 3D images with a step size of 1 voxel in 13 directions. Grey levels were binned into 256 levels for RLM and features were calculated in all 13 directions. Results: Feature variation generally increased with noise. Shape features were the most stable while RLM features were the most unstable. Intensity and GLCM features performed well, the latter being more robust. The most stable 1stO features were compactness, maximum and minimum length, standard deviation, root-mean-squared, I30, V10-V90, and entropy. The most stable 2ndO features were entropy, sum-average, sum-entropy, difference-average, difference-variance, difference-entropy, information-correlation-2, short-run-emphasis, long-run-emphasis, and run-percentage. In general, features computed from images from one of the phases of 4D scans were more stable than those from 3D scans. Conclusion: This study shows the need to characterize image features carefully before they are used in research and medical applications. It also shows that the performance of features, and thereby feature selection, may be assessed in part by noise analysis.
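    The second-order (GLCM) features named above are derived from a grey-level co-occurrence matrix. A minimal stdlib sketch of that computation for a single offset, with entropy and contrast derived from it (the study used 256 grey levels and 13 directions; a tiny 4-level example is used here for brevity):

```python
import math
import random

def glcm(image, levels, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one offset (dx, dy)."""
    m = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    total = 0
    for i in range(rows):
        for j in range(cols):
            ii, jj = i + dy, j + dx
            if 0 <= ii < rows and 0 <= jj < cols:
                m[image[i][j]][image[ii][jj]] += 1
                total += 1
    return [[v / total for v in row] for row in m]

def glcm_entropy(m):
    return -sum(p * math.log2(p) for row in m for p in row if p > 0)

def glcm_contrast(m):
    return sum((i - j) ** 2 * p
               for i, row in enumerate(m) for j, p in enumerate(row))

random.seed(0)
flat = [[1] * 4 for _ in range(4)]                        # homogeneous image
noisy = [[random.randrange(4) for _ in range(4)] for _ in range(4)]
```

    A homogeneous region yields zero GLCM entropy and contrast, while added noise spreads the co-occurrence counts and raises both, which is exactly the sensitivity the study quantifies.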

  12. Functional Principal Component Analysis and Randomized Sparse Clustering Algorithm for Medical Image Analysis

    PubMed Central

    Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao

    2015-01-01

    Due to advances in sensor technology, the growing volume of medical image data makes it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms that use a lasso-type penalty to select features. However, the accuracy of clustering with a lasso-type penalty depends on the choice of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383

  13. a Maximum Entropy Model of the Bearded Capuchin Monkey Habitat Incorporating Topography and Spectral Unmixing Analysis

    NASA Astrophysics Data System (ADS)

    Howard, A. M.; Bernardes, S.; Nibbelink, N.; Biondi, L.; Presotto, A.; Fragaszy, D. M.; Madden, M.

    2012-07-01

    Movement patterns of bearded capuchin monkeys (Cebus (Sapajus) libidinosus) in northeastern Brazil are likely impacted by environmental features such as elevation, vegetation density, or vegetation type. Habitat preferences of these monkeys provide insights regarding the impact of environmental features on species ecology and the degree to which they incorporate these features in movement decisions. In order to evaluate environmental features influencing movement patterns and predict areas suitable for movement, we employed a maximum entropy modelling approach, using observation points along capuchin monkey daily routes as species presence points. We combined these presence points with spatial data on important environmental features from remotely sensed data on land cover and topography. A spectral mixing analysis procedure was used to generate fraction images that represent green vegetation, shade and soil of the study area. A Landsat Thematic Mapper scene of the study area was geometrically and atmospherically corrected and used as input to a Minimum Noise Fraction (MNF) procedure, and a linear spectral unmixing approach was used to generate the fraction images. These fraction images and elevation were the environmental layer inputs for our logistic MaxEnt model of capuchin movement. Our model's predictive power (test AUC) was 0.775. Areas of high elevation (>450 m) showed low probabilities of presence, and percent green vegetation was the greatest overall contributor to model AUC. This work has implications for predicting daily movement patterns of capuchins in our field site, as suitability values from our model may relate to habitat preference and ease of movement.
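    MaxEnt's logistic output can be loosely pictured as a sigmoid applied to a weighted combination of the environmental layers. The sketch below is only a caricature of that idea; the weights and bias are hypothetical illustrations, not fitted coefficients from the study's model.

```python
import math

def suitability(elevation_m, green_veg_frac, w_elev=-0.01, w_veg=3.0, bias=1.0):
    """Toy logistic habitat-suitability score in (0, 1).

    Negative elevation weight and positive vegetation weight mirror the
    reported pattern (high elevation -> low presence probability, green
    vegetation the strongest contributor); the numbers are made up.
    """
    z = bias + w_elev * elevation_m + w_veg * green_veg_frac
    return 1.0 / (1.0 + math.exp(-z))

low = suitability(600, 0.2)   # high elevation, sparse vegetation
high = suitability(200, 0.8)  # low elevation, dense vegetation
```

    With these toy weights, the dense low-elevation cell scores far higher than the sparse high-elevation one, matching the qualitative finding above.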

  14. What's color got to do with it? The influence of color on visual attention in different categories.

    PubMed

    Frey, Hans-Peter; Honey, Christian; König, Peter

    2008-10-23

    Certain locations attract human gaze in natural visual scenes. Are there measurable features which distinguish these locations from others? While there has been extensive research on luminance-defined features, only a few studies have examined the influence of color on overt attention. In this study, we addressed this question by presenting color-calibrated stimuli and analyzing color features that are known to be relevant for the responses of LGN neurons. We recorded eye movements of 15 human subjects freely viewing colored and grayscale images of seven different categories. All images were also analyzed by the saliency map model (L. Itti, C. Koch, & E. Niebur, 1998). We find that human fixation locations differ between colored and grayscale versions of the same image much more than predicted by the saliency map. Examining the influence of various color features on overt attention, we find two extreme categories: while in rainforest images all color features are salient, none is salient in fractals. In all other categories, color features are selectively salient. This shows that the influence of color on overt attention depends on the type of image. Also, it is crucial to analyze neurophysiologically relevant color features for quantifying the influence of color on attention.

  15. Predicting Good Features for Image Geo-Localization Using Per-Bundle VLAD (Open Access)

    DTIC Science & Technology

    2016-02-18

    …transient scene elements (pedestrians, cars, billboards) and ubiquitous objects (trees, fences, signage) can introduce obfuscating cues into the geo… windows, characteristic wall patterns, and letters on signage are detected as positive elements, while features from trees, people, car wheels…

  16. Real-Time Feature Tracking Using Homography

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel S.; Cheng, Yang; Ansar, Adnan I.; Trotz, David C.; Padgett, Curtis W.

    2010-01-01

    This software finds feature point correspondences in sequences of images. It is designed for feature matching in aerial imagery. Feature matching is a fundamental step in a number of important image processing operations: calibrating the cameras in a camera array, stabilizing images in aerial movies, geo-registration of images, and generating high-fidelity surface maps from aerial movies. The method uses a Shi-Tomasi corner detector and normalized cross-correlation. This process is likely to produce some mismatches. The feature set is cleaned up using the assumption that there is a large planar patch visible in both images. At high altitude, this assumption is often reasonable. A mathematical transformation, called a homography, is developed that allows us to predict the position in image 2 of any point on the plane in image 1. Any feature pair that is inconsistent with the homography is thrown out. The output of the process is a set of feature pairs and the homography. The algorithms in this innovation are well known, but the new implementation improves the process in several ways. It runs in real time at 2 Hz on 64-megapixel imagery. The new Shi-Tomasi corner detector tries to produce the requested number of features by automatically adjusting the minimum distance between found features. The homography-finding code now uses an implementation of the RANSAC algorithm that adjusts the number of iterations automatically to achieve a pre-set probability of missing a set of inliers. The new interface allows the caller to pass in a set of predetermined points in one of the images, making it possible to track the same set of points through multiple frames.
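    The consensus step described above can be sketched as follows. For brevity the motion model is reduced from a full planar homography to a pure 2D translation; the shape of the RANSAC loop (sample, fit, count inliers, keep the best) is the same.

```python
import random

def ransac_translation(pairs, tol=2.0, iters=50, seed=0):
    """pairs: list of ((x1, y1), (x2, y2)) putative matches.

    Returns the best (dx, dy) translation and the inlier subset consistent
    with it within `tol` pixels. A real implementation would fit a homography
    from 4-point samples instead of a translation from 1-point samples.
    """
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)   # minimal sample: one pair
        dx, dy = x2 - x1, y2 - y1
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= tol
                   and abs(p[1][1] - p[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

matches = [((i, i), (i + 5, i + 3)) for i in range(20)]   # true shift (5, 3)
matches += [((0, 0), (40, 1)), ((3, 7), (9, 30))]         # two mismatches
model, inliers = ransac_translation(matches)
```

    The two mismatched pairs fall outside the tolerance of the winning model and are discarded, exactly the clean-up role the homography plays in the tracker.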

  17. Support vector machine for breast cancer classification using diffusion-weighted MRI histogram features: Preliminary study.

    PubMed

    Vidić, Igor; Egnell, Liv; Jerome, Neil P; Teruel, Jose R; Sjøbakk, Torill E; Østlie, Agnes; Fjøsne, Hans E; Bathen, Tone F; Goa, Pål Erik

    2018-05-01

    Diffusion-weighted MRI (DWI) is currently one of the fastest developing MRI-based techniques in oncology. Histogram properties from model fitting of DWI are useful features for differentiation of lesions, and classification can potentially be improved by machine learning. To evaluate classification of malignant and benign tumors and breast cancer subtypes using support vector machine (SVM). Prospective. Fifty-one patients with benign (n = 23) and malignant (n = 28) breast tumors (26 ER+, of which six were HER2+). Patients were imaged with DW-MRI (3T) using twice refocused spin-echo echo-planar imaging with repetition time / echo time (TR/TE) = 9000/86 msec, 90 × 90 matrix size, 2 × 2 mm in-plane resolution, 2.5 mm slice thickness, and 13 b-values. Apparent diffusion coefficient (ADC), relative enhanced diffusivity (RED), and the intravoxel incoherent motion (IVIM) parameters diffusivity (D), pseudo-diffusivity (D*), and perfusion fraction (f) were calculated. The histogram properties (median, mean, standard deviation, skewness, kurtosis) were used as features in SVM (10-fold cross-validation) for differentiation of lesions and subtyping. Accuracies of the SVM classifications were calculated to find the combination of features with the highest prediction accuracy. Mann-Whitney tests were performed for univariate comparisons. For benign versus malignant tumors, univariate analysis found 11 histogram properties to be significant differentiators. Using SVM, the highest accuracy (0.96) was achieved from a single feature (mean of RED), or from three-feature combinations of IVIM or ADC. Combining features from all models gave perfect classification. No single feature predicted HER2 status of ER+ tumors (univariate or SVM), although high accuracy (0.90) was achieved with SVM combining several features. Importantly, these features had to include higher-order statistics (kurtosis and skewness), indicating the importance of accounting for heterogeneity. 
Our findings suggest that SVM, using features from a combination of diffusion models, improves prediction accuracy for differentiation of benign versus malignant breast tumors, and may further assist in subtyping of breast cancer. Level of Evidence: 3. Technical Efficacy: Stage 3. J. Magn. Reson. Imaging 2018;47:1205-1216. © 2017 International Society for Magnetic Resonance in Medicine.
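    The five histogram properties used as SVM features can be computed directly from an ROI's voxel values. A stdlib-only sketch, assuming the Fisher convention for skewness and excess kurtosis (the paper's exact convention is not stated here), with hypothetical ADC values for illustration:

```python
import statistics

def histogram_features(values):
    """Median, mean, std, skewness, excess kurtosis of a list of voxel values."""
    n = len(values)
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)          # population standard deviation
    m3 = sum((v - mean) ** 3 for v in values) / n
    m4 = sum((v - mean) ** 4 for v in values) / n
    return {
        "median": statistics.median(values),
        "mean": mean,
        "std": sd,
        "skewness": m3 / sd ** 3 if sd else 0.0,
        "kurtosis": m4 / sd ** 4 - 3 if sd else 0.0,  # excess kurtosis
    }

# Hypothetical ADC values (x10^-3 mm^2/s) with one high-diffusivity voxel.
adc = [0.9, 1.0, 1.1, 1.2, 2.5]
feats = histogram_features(adc)
```

    The single high value drags the mean above the median and gives a positive skewness, the kind of higher-order signature the abstract notes was needed to capture tumor heterogeneity.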

  18. Optical differentiation between malignant and benign lymphadenopathy by grey scale texture analysis of endobronchial ultrasound convex probe images.

    PubMed

    Nguyen, Phan; Bashirzadeh, Farzad; Hundloe, Justin; Salvado, Olivier; Dowson, Nicholas; Ware, Robert; Masters, Ian Brent; Bhatt, Manoj; Kumar, Aravind Ravi; Fielding, David

    2012-03-01

    Morphologic and sonographic features of endobronchial ultrasound (EBUS) convex probe images are helpful in predicting metastatic lymph nodes. Grey scale texture analysis is a well-established methodology that has been applied to ultrasound images in other fields of medicine. The aim of this study was to determine if this methodology could differentiate between benign and malignant lymphadenopathy of EBUS images. Lymph nodes from digital images of EBUS procedures were manually mapped to obtain a region of interest and were analyzed in a prediction set. The regions of interest were analyzed for the following grey scale texture features in MATLAB (version 7.8.0.347 [R2009a]): mean pixel value, difference between maximal and minimal pixel value, SEM pixel value, entropy, correlation, energy, and homogeneity. Significant grey scale texture features were used to assess a validation set compared with fluoro-D-glucose (FDG)-PET-CT scan findings where available. Fifty-two malignant nodes and 48 benign nodes were in the prediction set. Malignant nodes had a greater difference in the maximal and minimal pixel values, SEM pixel value, entropy, and correlation, and a lower energy (P < .0001 for all values). Fifty-one lymph nodes were in the validation set; 44 of 51 (86.3%) were classified correctly. Eighteen of these lymph nodes also had FDG-PET-CT scan assessment, which correctly classified 14 of 18 nodes (77.8%), compared with grey scale texture analysis, which correctly classified 16 of 18 nodes (88.9%). Grey scale texture analysis of EBUS convex probe images can be used to differentiate malignant and benign lymphadenopathy. Preliminary results are comparable to FDG-PET-CT scan.

  19. Machine Learning for Medical Imaging

    PubMed Central

    Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L.

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017 PMID:28212054

  1. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion.

    PubMed

    Zafar, Raheel; Dass, Sarat C; Malik, Aamir Saeed

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain-computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for the extraction of features, a t-test is used for the selection of significant features, and likelihood-ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes input data from multichannel EEG time series, an approach also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method is the most popular feature extraction and prediction method currently in use; it showed an accuracy of 65.7%. The proposed method, however, predicts novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods.
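    The t-test feature-selection step can be sketched as a two-sample t statistic computed per feature, keeping features whose |t| clears a threshold. Welch's unequal-variance form and the threshold value below are illustrative assumptions, not details from the paper.

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic for two lists of observations."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def select_features(class_a, class_b, threshold=2.0):
    """class_a/class_b: per-trial feature vectors; returns indices of
    features whose |t| between the two classes clears the threshold."""
    kept = []
    for j in range(len(class_a[0])):
        t = welch_t([row[j] for row in class_a], [row[j] for row in class_b])
        if abs(t) >= threshold:
            kept.append(j)
    return kept

# Toy data: feature 0 separates the classes; feature 1 is pure overlap.
a = [[1.0, 5.0], [1.2, 4.9], [0.9, 5.1], [1.1, 5.0]]
b = [[3.0, 5.0], [3.1, 5.1], [2.9, 4.9], [3.2, 5.0]]
```

    Only the discriminative feature survives the threshold, which is the pruning the hybrid pipeline performs before score fusion.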

  2. Distant failure prediction for early stage NSCLC by analyzing PET with sparse representation

    NASA Astrophysics Data System (ADS)

    Hao, Hongxia; Zhou, Zhiguo; Wang, Jing

    2017-03-01

    Positron emission tomography (PET) imaging has been widely explored for treatment outcome prediction. Radiomics-driven methods provide new insight to quantitatively explore underlying information from PET images. However, automatically extracting clinically meaningful features for prognosis remains a challenging problem. In this work, we develop a PET-guided distant failure predictive model for early stage non-small cell lung cancer (NSCLC) patients after stereotactic ablative radiotherapy (SABR) using sparse representation. The proposed method does not need precalculated features and can learn intrinsically distinctive features contributing to the classification of patients with distant failure. The proposed framework includes two main parts: 1) intra-tumor heterogeneity description; and 2) dictionary-pair-learning-based sparse representation. Tumor heterogeneity is initially captured through an anisotropic kernel and represented as a set of concatenated vectors, which forms the sample gallery. Then, given a test tumor image, its identity (i.e., distant failure or not) is classified by applying the dictionary-pair-learning-based sparse representation. We evaluate the proposed approach on 48 NSCLC patients treated by SABR at our institute. Experimental results show that the proposed approach can achieve an area under the receiver operating characteristic curve (AUC) of 0.70 with a sensitivity of 69.87% and a specificity of 69.51% using five-fold cross validation.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, J; Gong, G; Cui, Y

    Purpose: To predict early pathological response of breast cancer to neoadjuvant chemotherapy (NAC) based on quantitative, multi-region analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Methods: In this institutional review board-approved study, 35 patients diagnosed with stage II/III breast cancer were retrospectively investigated using DCE-MR images acquired before and after the first cycle of NAC. First, principal component analysis (PCA) was used to reduce the dimensionality of the high-temporal-resolution DCE-MRI data. We then partitioned the whole tumor into multiple subregions using k-means clustering based on the PCA-defined eigenmaps. Within each tumor subregion, we extracted four quantitative Haralick texture features based on the gray-level co-occurrence matrix (GLCM). The change in texture features in each tumor subregion between pre- and during-NAC images was used to predict pathological complete response after NAC. Results: Three tumor subregions were identified through clustering, each with distinct enhancement characteristics. In univariate analysis, all imaging predictors except one extracted from the tumor subregion associated with fast wash-out were statistically significant (p < 0.05) after correcting for multiple testing, with areas under the ROC curve (AUC) between 0.75 and 0.80. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.79 (p = 0.002) in leave-one-out cross validation. This improved upon conventional imaging predictors such as tumor volume (AUC=0.53) and texture features based on whole-tumor analysis (AUC=0.65). Conclusion: The heterogeneity of the tumor subregion associated with fast wash-out on DCE-MRI predicted early pathological response to neoadjuvant chemotherapy in breast cancer.
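    The subregion-partitioning step can be sketched with plain Lloyd's k-means. For brevity the per-voxel descriptors here are hypothetical one-dimensional wash-out slopes rather than the PCA-defined eigenmap values used in the study.

```python
def kmeans_1d(values, k, iters=20):
    """Plain Lloyd's k-means on scalars; returns (centroids, labels)."""
    # Spread the initial centroids across the sorted values.
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: nearest centroid for each value.
        labels = [min(range(k), key=lambda c: (v - centroids[c]) ** 2)
                  for v in values]
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

# Hypothetical wash-out slopes: slow (~0), medium (~0.5), fast (~1.0) voxels.
slopes = [0.02, 0.05, 0.04, 0.48, 0.52, 0.50, 0.98, 1.02, 1.00]
centroids, labels = kmeans_1d(slopes, k=3)
```

    The three recovered clusters mirror the three enhancement-distinct subregions found in the study; texture change would then be measured within each cluster separately.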

  4. Applications and limitations of radiomics

    NASA Astrophysics Data System (ADS)

    Yip, Stephen S. F.; Aerts, Hugo J. W. L.

    2016-07-01

    Radiomics is an emerging field in quantitative imaging that uses advanced imaging features to objectively and quantitatively describe tumour phenotypes. Radiomic features have recently drawn considerable interest due to their potential predictive power for treatment outcomes and cancer genetics, which may have important applications in personalized medicine. In this technical review, we describe applications and challenges of the radiomics field. We will review radiomic application areas and technical issues, as well as proper practices for the design of radiomic studies.

  5. Learning-based prediction of gestational age from ultrasound images of the fetal brain.

    PubMed

    Namburete, Ana I L; Stebbing, Richard V; Kemp, Bryn; Yaqub, Mohammad; Papageorghiou, Aris T; Alison Noble, J

    2015-04-01

    We propose an automated framework for predicting gestational age (GA) and neurodevelopmental maturation of a fetus based on 3D ultrasound (US) brain image appearance. Our method capitalizes on age-related sonographic image patterns in conjunction with clinical measurements to develop, for the first time, a predictive age model which improves on the GA-prediction potential of US images. The framework benefits from a manifold surface representation of the fetal head which delineates the inner skull boundary and serves as a common coordinate system based on cranial position. This allows for fast and efficient sampling of anatomically-corresponding brain regions to achieve like-for-like structural comparison of different developmental stages. We develop bespoke features which capture neurosonographic patterns in 3D images, and using a regression forest classifier, we characterize structural brain development both spatially and temporally to capture the natural variation existing in a healthy population (N=447) over an age range of active brain maturation (18-34 weeks). On a routine clinical dataset (N=187) our age prediction results strongly correlate with true GA (r=0.98, accurate to within ±6.10 days), confirming the link between maturational progression and neurosonographic activity observable across gestation. Our model also outperforms current clinical methods by ±4.57 days in the third trimester, a period complicated by biological variations in the fetal population. Through feature selection, the model successfully identified the most age-discriminating anatomies over this age range as being the Sylvian fissure, cingulate, and callosal sulci. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  6. DCT-based iris recognition.

    PubMed

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst-case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 x 10^-4 on the available data sets.
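    The coding idea can be sketched as follows: take a 1D DCT-II of an intensity patch, form one bit per coefficient from the sign of the coefficient difference between neighbouring patches, and match codes by normalized Hamming distance. This is a simplification of the paper's overlapped-angular-patch scheme, with hypothetical patch values.

```python
import math

def dct2(x):
    """Unnormalized 1D DCT-II of a sequence."""
    n = len(x)
    return [sum(x[i] * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i in range(n)) for k in range(n)]

def iris_code(patch_a, patch_b):
    """One bit per coefficient: sign of the DCT difference of two patches."""
    return [1 if a - b > 0 else 0 for a, b in zip(dct2(patch_a), dct2(patch_b))]

def hamming(code1, code2):
    """Normalized Hamming distance between two equal-length bit codes."""
    return sum(b1 != b2 for b1, b2 in zip(code1, code2)) / len(code1)

# Hypothetical 8-sample intensity patches from a normalized iris image.
p1 = [10, 12, 9, 14, 11, 13, 8, 15]
p2 = [9, 11, 10, 13, 12, 12, 9, 14]
code = iris_code(p1, p2)
```

    At verification time a variable threshold on this distance trades off FAR against FRR, as described above; an identical code pair gives distance 0 and a fully flipped one gives distance 1.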

  7. Bladder Cancer Treatment Response Assessment in CT using Radiomics with Deep-Learning.

    PubMed

    Cha, Kenny H; Hadjiiski, Lubomir; Chan, Heang-Ping; Weizer, Alon Z; Alva, Ajjai; Cohan, Richard H; Caoili, Elaine M; Paramagul, Chintana; Samala, Ravi K

    2017-08-18

    Cross-sectional X-ray imaging has become the standard for staging most solid organ malignancies. However, for some malignancies such as urinary bladder cancer, the ability to accurately assess the local extent of disease and the response to systemic chemotherapy is limited with current imaging approaches. In this study, we explored whether radiomics-based predictive models using pre- and post-treatment computed tomography (CT) images can distinguish between bladder cancers with and without complete chemotherapy responses. We assessed three unique radiomics-based predictive models, each employing different fundamental design principles: a pattern-recognition method using a deep-learning convolutional neural network (DL-CNN), a more deterministic radiomics-feature-based approach, and a bridging method between the two that extracts radiomics features from the image patterns. Our study indicates that computerized assessment using radiomics information from the pre- and post-treatment CT of bladder cancer patients has the potential to assist in the assessment of treatment response.

  8. Toolkits and Libraries for Deep Learning.

    PubMed

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy; Philbrick, Kenneth

    2017-08-01

    Deep learning is an important new area of machine learning which encompasses a wide range of neural network architectures designed to complete various tasks. In the medical imaging domain, example tasks include organ segmentation, lesion detection, and tumor classification. The most popular network architecture for deep learning for images is the convolutional neural network (CNN). Whereas traditional machine learning requires determination and calculation of features from which the algorithm learns, deep learning approaches learn the important features as well as the proper weighting of those features to make predictions for new data. In this paper, we will describe some of the libraries and tools that are available to aid in the construction and efficient execution of deep learning as applied to medical images.

  9. Measuring and Predicting Tag Importance for Image Retrieval.

    PubMed

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags and sentence descriptions are combined with visual cues to reduce the semantic gap in image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training and, in turn, degraded retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model that jointly exploits visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.

  10. Can technical characteristics predict clinical performance in PET/CT imaging? A correlation study for thyroid cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Kallergi, Maria; Menychtas, Dimitrios; Georgakopoulos, Alexandros; Pianou, Nikoletta; Metaxas, Marinos; Chatziioannou, Sofia

    2013-03-01

    The purpose of this study was to determine whether image characteristics could be used to predict the outcome of ROC studies in PET/CT imaging. Patients suspected of recurrent thyroid cancer underwent a standard whole body (WB) examination and an additional high-resolution head-and-neck (HN) F18-FDG PET/CT scan. The value of the latter was determined with an ROC study, the results of which showed that the WB+HN combination was better than WB alone for thyroid cancer detection and diagnosis. Following the ROC experiment, the WB and HN images of confirmed benign or malignant thyroid disease were analyzed and first- and second-order textural features were determined. Features included minimum, mean, and maximum intensity, as well as contrast in regions of interest encircling the thyroid lesions. Lesion size and standardized uptake values (SUV) were also determined. Bivariate analysis was applied to determine relationships between WB and HN features and between observer ROC responses and the various feature values. The two sets showed significant associations in the values of SUV, contrast, and lesion size. They were completely different when the intensities were considered; no relationship was found between the WB minimum, maximum, and mean ROI values and their HN counterparts. SUV and contrast were the strongest predictors of ROC performance on PET/CT examinations of thyroid cancer. The high-resolution HN images seem to enhance these relationships, but without the single dramatic effect projected by the ROC results. A combination of features from both WB and HN datasets may possibly be a more robust predictor of ROC performance.

  11. Measurement of glucose concentration by image processing of thin film slides

    NASA Astrophysics Data System (ADS)

    Piramanayagam, Sankaranaryanan; Saber, Eli; Heavner, David

    2012-02-01

    Measurement of glucose concentration is important for the diagnosis and treatment of diabetes mellitus and other medical conditions. This paper describes a novel image-processing-based approach to measuring glucose concentration. A fluid drop (patient sample) is placed on a thin film slide. Glucose present in the sample reacts with reagents on the slide to produce a color dye. The color intensity of the dye varies with the glucose concentration level. Current methods use spectrophotometry to determine the glucose level of the sample. Our proposed algorithm uses an image of the slide, captured at a specific wavelength, to automatically determine glucose concentration. The algorithm consists of two phases: training and testing. The training datasets consist of images at different concentration levels. The dye-occupied image region is first segmented using a Hough-based technique, and an intensity-based feature is then calculated from the segmented region. Subsequently, a mathematical model describing the relationship between the generated feature values and the given concentrations is obtained. During testing, the dye region of a test slide image is segmented, followed by feature extraction, as in training. In the final step, the algorithm uses the model (feature vs. concentration) obtained from training and the feature generated from the test image to predict the unknown concentration. The performance of the image-based analysis was compared with that of a standard glucose analyzer.
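
    The training phase described above fits a feature-vs-concentration model that the test phase inverts to predict unknown samples. A minimal sketch, assuming a linear model and entirely hypothetical intensity/concentration pairs (the paper does not specify its model form):

```python
def fit_line(features, concentrations):
    """Ordinary least squares for a linear feature-to-concentration model."""
    n = len(features)
    mx = sum(features) / n
    my = sum(concentrations) / n
    sxx = sum((x - mx) ** 2 for x in features)
    sxy = sum((x - mx) * (y - my) for x, y in zip(features, concentrations))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def predict(model, feature):
    slope, intercept = model
    return slope * feature + intercept

# hypothetical training pairs: mean dye intensity -> known glucose (mg/dL)
train_intensity = [0.20, 0.35, 0.50, 0.65]
train_glucose = [60.0, 105.0, 150.0, 195.0]
model = fit_line(train_intensity, train_glucose)
estimate = predict(model, 0.42)  # feature extracted from an unknown test slide
```

    In practice the calibration curve for a chromogenic assay may be nonlinear, in which case a polynomial or spline fit would replace the straight line.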

  12. Analysis of DCE-MRI features in tumor and the surrounding stroma for prediction of Ki-67 proliferation status in breast cancer

    NASA Astrophysics Data System (ADS)

    Li, Hui; Fan, Ming; Zhang, Peng; Li, Yuanzhe; Cheng, Hu; Zhang, Juan; Shao, Guoliang; Li, Lihua

    2018-03-01

    Breast cancer, with its high heterogeneity, is the most common malignancy in women. In addition to the tumor itself, the tumor microenvironment can also play a fundamental role in the occurrence and development of tumors. The aim of this study is to investigate the role of heterogeneity within a tumor and the surrounding stromal tissue in predicting the Ki-67 proliferation status of oestrogen receptor (ER)-positive breast cancer patients. To this end, we collected 62 patients imaged with preoperative dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) for analysis. The peritumoral stromal tissue outside the tumor was segmented into 8 shells, each 5 mm wide. The mean enhancement rate in the stromal shells decreased as their distance to the tumor increased. Statistical and texture features were extracted from the tumor and the surrounding stromal bands, and multivariate logistic regression classifiers were trained and tested based on these features. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the classifiers. The statistical model using features extracted from the boundary shell next to the tumor produced an AUC of 0.796+/-0.076, better than models using features from the other subregions. Furthermore, the prediction model using 7 features from the entire tumor produced an AUC value of 0.855+/-0.065, while the classifier based on 9 selected features extracted from the peritumoral stromal region showed an AUC value of 0.870+/-0.050. Finally, after fusion of the predictive models obtained from the entire tumor and the peritumoral stromal regions, the classifier performance was significantly improved, with an AUC of 0.920. The results indicate that heterogeneity in the tumor boundary and peritumoral stromal region could be valuable in predicting this prognosis-associated indicator.
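
    The shell construction above assigns each stromal pixel to a band by its distance from the tumor. As a brute-force sketch on a toy grid (the study's exact segmentation pipeline is not described; pixel spacing and shell parameters here are hypothetical):

```python
from math import hypot

def stromal_shells(mask, n_shells=3, shell_width=5.0, pixel_mm=1.0):
    """Assign each non-tumor pixel to a peritumoral shell by its distance
    (in mm) to the nearest tumor pixel; shell 0 is closest to the tumor."""
    tumor = [(i, j) for i, row in enumerate(mask)
             for j, v in enumerate(row) if v]
    shells = {}
    for i, row in enumerate(mask):
        for j, v in enumerate(row):
            if v:
                continue  # tumor pixels are not part of the stroma
            d = min(hypot(i - ti, j - tj) for ti, tj in tumor) * pixel_mm
            k = int(d // shell_width)
            if k < n_shells:
                shells.setdefault(k, []).append((i, j))
    return shells

# toy 1x12 "image" with a tumor pixel at one end, 2 mm pixel spacing
mask = [[1] + [0] * 11]
sh = stromal_shells(mask, n_shells=3, shell_width=5.0, pixel_mm=2.0)
```

    For realistic image sizes a distance transform would replace the O(n^2) nearest-tumor-pixel search.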

  13. A feature alignment score for online cone-beam CT-based image-guided radiotherapy for prostate cancer.

    PubMed

    Hargrave, Catriona; Deegan, Timothy; Poulsen, Michael; Bednarz, Tomasz; Harden, Fiona; Mengersen, Kerrie

    2018-05-17

    To develop a method for scoring online cone-beam CT (CBCT)-to-planning CT image feature alignment to inform prostate image-guided radiotherapy (IGRT) decision-making. The feasibility of incorporating volume variation metric thresholds predictive of delivering the planned dose into weighted functions was investigated. Radiation therapists and radiation oncologists participated in workshops where they reviewed prostate CBCT-IGRT case examples and completed a paper-based survey of image feature matching practices. For 36 prostate cancer patients, one daily CBCT was retrospectively contoured and then registered with their plan to simulate delivered dose if (a) no online setup corrections and (b) online image alignment and setup corrections were performed. Survey results were used to select variables for inclusion in classification and regression tree (CART) and boosted regression tree (BRT) modeling of volume variation metric thresholds predictive of delivering the planned dose to the prostate, proximal seminal vesicles (PSV), bladder, and rectum. Weighted functions incorporating the CART and BRT results were used to calculate a score of individual tumor and organ-at-risk image feature alignment (FAS_TV/OAR). Scaled and weighted FAS_TV/OAR were then used to calculate a score of overall treatment compliance (FAS_global) for a given CBCT-planning CT registration. The FAS_TV/OAR were assessed for sensitivity, specificity, and predictive power. FAS_global thresholds indicative of high, medium, or low overall treatment plan compliance were determined using coefficients from multiple linear regression analysis. Thirty-two participants completed the prostate CBCT-IGRT survey. 
While responses demonstrated consensus of practice for preferential ranking of planning CT and CBCT match features in the presence of deformation and rotation, variation existed in the specified thresholds for observed volume differences requiring patient repositioning or repeat bladder and bowel preparation. The CART and BRT modeling indicated that, for a given registration, a Dice similarity coefficient >0.80 and >0.60 for the prostate and PSV, respectively, and a maximum Hausdorff distance <8.0 mm for both structures were predictive of delivered dose within ±5% of planned dose. A normalized volume difference <1.0 and a CBCT anterior rectum wall >1.0 mm anterior to the planning CT anterior rectum wall were predictive of delivered dose >5% of planned rectum dose. A normalized volume difference <0.88, and a CBCT bladder wall >13.5 mm inferior and >5.0 mm posterior to the planning CT bladder, were predictive of delivered dose >5% of planned bladder dose. A FAS_TV/OAR >0 is indicative of delivery of the planned dose. For FAS_TV/OAR calculated for the prostate, PSV, bladder, and rectum using test data, sensitivity was 0.56, 0.75, 0.89, and 1.00, respectively; specificity 0.90, 0.94, 0.59, and 1.00; positive predictive power 0.90, 0.86, 0.53, and 1.00; and negative predictive power 0.56, 0.89, 0.91, and 1.00. Thresholds for the calculated FAS_global were low <60, medium 60-80, and high >80, with a 27% misclassification rate for the test data. A FAS_global incorporating nested FAS_TV/OAR and volume variation metric thresholds predictive of treatment plan compliance was developed, offering an alternative to pretreatment dose calculations for assessing treatment delivery accuracy. © 2018 American Association of Physicists in Medicine.
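
    The Dice similarity coefficient and Hausdorff distance thresholds reported in this record are standard contour-overlap metrics. A minimal pure-Python sketch on toy 2D point sets (real use would operate on 3D contour voxels):

```python
from math import hypot

def dice(a, b):
    """Dice similarity coefficient between two sets of pixel coordinates."""
    a, b = set(a), set(b)
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric (maximum) Hausdorff distance between two point sets."""
    def directed(p, q):
        return max(min(hypot(x1 - x2, y1 - y2) for (x2, y2) in q)
                   for (x1, y1) in p)
    return max(directed(a, b), directed(b, a))

plan = [(x, y) for x in range(6) for y in range(6)]      # planning-CT contour
cbct = [(x + 1, y) for x in range(6) for y in range(6)]  # shifted CBCT contour
d = dice(plan, cbct)
h = hausdorff(plan, cbct)
ok = d > 0.80 and h < 8.0  # prostate thresholds reported in this study
```

    A small rigid shift keeps the overlap above the Dice threshold and the Hausdorff distance well under 8 mm, so this registration would score as compliant.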

  14. An efficient approach for site-specific scenery prediction in surveillance imaging near Earth's surface

    NASA Astrophysics Data System (ADS)

    Jylhä, Juha; Marjanen, Kalle; Rantala, Mikko; Metsäpuro, Petri; Visa, Ari

    2006-09-01

    Surveillance camera automation and camera network development are growing areas of interest. This paper proposes a competent approach to enhancing camera surveillance with Geographic Information Systems (GIS) when the camera is located at a height of 10-1000 m. A digital elevation model (DEM), a terrain class model, and a flight obstacle register constitute the auxiliary information exploited. The approach takes into account the spherical shape of the Earth and realistic terrain slopes and, also considering forests, determines visible and shadowed regions. The efficiency arises from reduced dimensionality in the visibility computation. Image processing is aided by predicting certain features of the visible terrain in advance. The features include distance from the camera and the terrain or object class, such as coniferous forest, field, urban site, lake, or mast. The performance of the approach is studied by comparing a photograph of a Finnish forested landscape with the prediction. The predicted background fits well, and the potential of such knowledge aids for various purposes becomes apparent.
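
    The core of visible/shadow region determination is a line-of-sight test along elevation profiles radiating from the camera. As an illustrative 1D sketch (ignoring Earth curvature and forests, which the paper's full method accounts for; the profile values are hypothetical):

```python
def visible(profile, cam_height):
    """Mark each sample of a terrain elevation profile (camera above index 0,
    at elevation profile[0] + cam_height) as visible or shadowed by checking
    whether any intermediate sample rises above the sight line."""
    cam = profile[0] + cam_height
    flags = [True]  # the camera's own cell
    for i in range(1, len(profile)):
        slope = (profile[i] - cam) / i
        blocked = any(profile[j] > cam + slope * j for j in range(1, i))
        flags.append(not blocked)
    return flags

# toy profile (m): the ridge at index 2 shadows the terrain behind it
profile = [100, 90, 130, 80, 85, 140]
flags = visible(profile, cam_height=10)
```

    The paper's dimensionality reduction amounts to performing such 1D tests along radial profiles instead of testing every DEM cell pair.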

  15. Novel method to predict body weight in children based on age and morphological facial features.

    PubMed

    Huang, Ziyin; Barrett, Jeffrey S; Barrett, Kyle; Barrett, Ryan; Ng, Chee M

    2015-04-01

    A new and novel approach to predicting the body weight of children based on age and morphological facial features using a three-layer feed-forward artificial neural network (ANN) model is reported. The model takes in four parameters: the age-based CDC-inferred median body weight and three facial feature distances measured from digital facial images. In this study, thirty-nine volunteer subjects with ages ranging from 6 to 18 years and body weights ranging from 18.6 to 96.4 kg were used for model development and validation. The final model has a mean prediction error of 0.48, a mean squared error of 18.43, and a coefficient of correlation of 0.94. The model shows significant improvement in prediction accuracy over several age-based body weight prediction methods. Combined with a facial recognition algorithm that can detect, extract, and measure the facial features used in this study, mobile applications incorporating this body weight prediction method may be developed for clinical investigations where access to scales is limited. © 2014, The American College of Clinical Pharmacology.
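
    A three-layer feed-forward pass with four inputs can be sketched in a few lines. The weights, biases, and input values below are entirely hypothetical (the paper's trained parameters are not given); the structure is what matters:

```python
from math import tanh

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One pass through a 3-layer feed-forward network:
    inputs -> tanh hidden layer -> linear output (predicted weight, kg)."""
    hidden = [tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

# hypothetical inputs: CDC median weight (kg) and three facial distances (cm)
x = [38.0, 11.2, 6.5, 4.1]
w_hidden = [[0.02, 0.01, -0.03, 0.05],   # hypothetical trained weights
            [0.01, -0.02, 0.04, 0.02]]
b_hidden = [0.1, -0.2]
w_out = [12.0, 8.0]
b_out = 30.0
pred_kg = forward(x, w_hidden, b_hidden, w_out, b_out)
```

    Training would adjust the weights by backpropagation against measured body weights; only the inference pass is shown here.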

  16. Dynamic dual-energy chest radiography: a potential tool for lung tissue motion monitoring and kinetic study

    PubMed Central

    Xu, Tong; Ducote, Justin L.; Wong, Jerry T.; Molloi, Sabee

    2011-01-01

    Dual-energy chest radiography has the potential to provide better diagnosis of lung disease by removing the bone signal from the image. Dynamic dual-energy radiography is now possible with the introduction of digital flat panel detectors. The purpose of this study is to evaluate the feasibility of using dynamic dual-energy chest radiography for functional lung imaging and tumor motion assessment. The dual energy system used in this study can acquire up to 15 frame of dual-energy images per second. A swine animal model was mechanically ventilated and imaged using the dual-energy system. Sequences of soft-tissue images were obtained using dual-energy subtraction. Time subtracted soft-tissue images were shown to be able to provide information on regional ventilation. Motion tracking of a lung anatomic feature (a branch of pulmonary artery) was performed based on an image cross-correlation algorithm. The tracking precision was found to be better than 1 mm. An adaptive correlation model was established between the above tracked motion and an external surrogate signal (temperature within the tracheal tube). This model is used to predict lung feature motion using the continuous surrogate signal and low frame rate dual-energy images (0.1 to 3.0 frames /sec). The average RMS error of the prediction was (1.1 ± 0.3) mm. The dynamic dual-energy was shown to be potentially useful for lung functional imaging such as regional ventilation and kinetic studies. It can also be used for lung tumor motion assessment and prediction during radiation therapy. PMID:21285477
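
    Cross-correlation tracking finds the offset at which a template of the anatomic feature best matches each new frame. A 1D pure-Python sketch with a synthetic intensity profile (real tracking operates on 2D image patches):

```python
from math import sqrt

def ncc(a, b):
    """Zero-normalized cross-correlation of two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def track(template, frame):
    """Return the offset in `frame` where `template` correlates best."""
    scores = [ncc(template, frame[i:i + len(template)])
              for i in range(len(frame) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

# synthetic intensity profile of an artery branch, shifted by 3 samples
template = [0, 1, 4, 9, 4, 1, 0]
frame = [0, 0, 0] + template + [0, 0]
shift = track(template, frame)
```

    Repeating this per frame yields the motion trace that the paper then correlates with the external surrogate signal.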

  18. A quantitative study of shape descriptors from glioblastoma multiforme phenotypes for predicting survival outcome

    PubMed Central

    Desrosiers, Christian; Hassan, Lama; Tanougast, Camel

    2016-01-01

    Objective: Predicting the survival outcome of patients with glioblastoma multiforme (GBM) is of key importance to clinicians for selecting the optimal course of treatment. The goal of this study was to evaluate the usefulness of geometric shape features, extracted from MR images, as a potential non-invasive way to characterize GBM tumours and predict the overall survival times of patients with GBM. Methods: The data of 40 patients with GBM were obtained from the Cancer Genome Atlas and the Cancer Imaging Archive. The T1-weighted post-contrast and fluid-attenuated inversion-recovery volumes of patients were co-registered and segmented to delineate regions corresponding to three GBM phenotypes: necrosis, active tumour and oedema/invasion. A set of two-dimensional shape features was then extracted slice-wise from each phenotype region and combined over slices to describe the three-dimensional shape of these phenotypes. Thereafter, a Kruskal–Wallis test was employed to identify shape features with significantly different distributions across phenotypes. Moreover, a Kaplan–Meier analysis was performed to find features strongly associated with GBM survival. Finally, a multivariate analysis based on the random forest model was used for predicting the survival group of patients with GBM. Results: Our analysis using the Kruskal–Wallis test showed that all but one shape feature had statistically significant differences across phenotypes, with p-value < 0.05 following Holm–Bonferroni correction, justifying the analysis of GBM tumour shapes on a per-phenotype basis. Furthermore, the survival analysis based on the Kaplan–Meier estimator identified three features derived from necrotic regions (i.e. Eccentricity, Extent and Solidity) that were significantly correlated with overall survival (corrected p-value < 0.05; hazard ratios between 1.68 and 1.87). 
In the multivariate analysis, features from necrotic regions gave the highest accuracy in predicting the survival group of patients, with a mean area under the receiver-operating characteristic curve (AUC) of 63.85%. Combining the features of all three phenotypes increased the mean AUC to 66.99%, suggesting that shape features from different phenotypes can be used in a synergistic manner to predict GBM survival. Conclusion: Results show that shape features, in particular those extracted from necrotic regions, can be used effectively to characterize GBM tumours and predict the overall survival of patients with GBM. Advances in knowledge: Simple volumetric features have largely been used to characterize the different phenotypes of a GBM tumour (i.e. active tumour, oedema and necrosis). This study extends previous work by considering a wide range of shape features, extracted from different phenotypes, for the prediction of survival in patients with GBM. PMID:27781499
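
    Of the three survival-associated descriptors named above, Extent is the simplest: region area divided by bounding-box area, so compact regions score near 1 and elongated ones score low. A toy sketch on 2D pixel sets (the study computes such features per slice and combines them over slices):

```python
def extent(region):
    """Extent shape feature: region area divided by its bounding-box area."""
    xs = [x for x, y in region]
    ys = [y for x, y in region]
    bbox = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return len(region) / bbox

# a filled 4x4 square vs. a diagonal line, both spanning a 4x4 bounding box
square = [(x, y) for x in range(4) for y in range(4)]
diagonal = [(i, i) for i in range(4)]
e_square = extent(square)    # compact region -> 1.0
e_diag = extent(diagonal)    # elongated region -> low extent
```

    Eccentricity and Solidity are computed analogously from the fitted ellipse and the convex hull of the region, respectively.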

  19. Quantitative nuclear histomorphometry predicts oncotype DX risk categories for early stage ER+ breast cancer.

    PubMed

    Whitney, Jon; Corredor, German; Janowczyk, Andrew; Ganesan, Shridar; Doyle, Scott; Tomaszewski, John; Feldman, Michael; Gilmore, Hannah; Madabhushi, Anant

    2018-05-30

    Gene-expression companion diagnostic tests, such as the Oncotype DX test, assess the risk of early stage estrogen-receptor-positive (ER+) breast cancers and guide clinicians in the decision of whether or not to use chemotherapy. However, these tests are typically expensive, time consuming, and tissue-destructive. In this paper, we evaluate the ability of computer-extracted nuclear morphology features from routine hematoxylin and eosin (H&E) stained images of 178 early stage ER+ breast cancer patients to predict corresponding risk categories derived using the Oncotype DX test. A total of 216 features corresponding to the nuclear shape and architecture categories were extracted from each of the pathologic images, and four feature selection schemes (Ranksum; Principal Component Analysis with Variable Importance on Projection, PCA-VIP; Maximum-Relevance, Minimum-Redundancy Mutual Information Difference, MRMR MID; and Maximum-Relevance, Minimum-Redundancy Mutual Information Quotient, MRMR MIQ) were employed to identify the most discriminating features. These features were used to train 4 machine learning classifiers (Random Forest, Neural Network, Support Vector Machine, and Linear Discriminant Analysis) via 3-fold cross-validation. The four sets of risk categories, with the top Area Under the receiver operating characteristic Curve (AUC) machine classifier performances, were: 1) Low ODx and Low mBR grade vs. High ODx and High mBR grade (Low-Low vs. High-High) (AUC = 0.83), 2) Low ODx vs. High ODx (AUC = 0.72), 3) Low ODx vs. Intermediate and High ODx (AUC = 0.58), and 4) Low and Intermediate ODx vs. High ODx (AUC = 0.65). Trained models were tested on an independent validation set of 53 cases comprising Low and High ODx risk, and demonstrated per-patient accuracies ranging from 75 to 86%. 
Our results suggest that computerized image analysis of digitized H&E pathology images of early stage ER+ breast cancer might be able to predict the corresponding Oncotype DX risk categories.

  20. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to compress images efficiently by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements over current image inter-coding solutions such as High Efficiency Video Coding (HEVC).

  1. Classifying Acute Ischemic Stroke Onset Time using Deep Imaging Features

    PubMed Central

    Ho, King Chung; Speier, William; El-Saden, Suzie; Arnold, Corey W.

    2017-01-01

    Models have been developed to predict stroke outcomes (e.g., mortality) in an attempt to provide better guidance for stroke treatment. However, there is little work on developing classification models for the problem of unknown time-since-stroke (TSS), which determines a patient's treatment eligibility based on a clinically defined cutoff time point (i.e., <4.5 hrs). In this paper, we construct and compare machine learning methods to classify TSS<4.5 hrs using magnetic resonance (MR) imaging features. We also propose a deep learning model to extract hidden representations from the MR perfusion-weighted images and demonstrate classification improvement by incorporating these additional imaging features. Finally, we discuss a strategy to visualize the learned features from the proposed deep learning model. The cross-validation results show that our best classifier achieved an area under the curve of 0.68, a significant improvement over current clinical methods (0.58), demonstrating the potential benefit of using advanced machine learning methods in TSS classification. PMID:29854156
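
    The AUC values quoted here have a direct probabilistic reading: the chance that a randomly chosen positive case is scored above a randomly chosen negative one. A pure-Python sketch via the rank (Mann-Whitney) formulation, with hypothetical labels and scores:

```python
def auc(labels, scores):
    """Area under the ROC curve: the probability that a positive case
    is scored above a negative one (ties count half)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical classifier scores for TSS < 4.5 h (label 1) vs. >= 4.5 h (0)
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.6, 0.3, 0.2]
a = auc(labels, scores)
```

    An AUC of 0.5 corresponds to chance-level ranking, so the reported gain from 0.58 to 0.68 roughly doubles the margin over chance.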

  2. Advances in feature selection methods for hyperspectral image processing in food industry applications: a review.

    PubMed

    Dai, Qiong; Cheng, Jun-Hu; Sun, Da-Wen; Zeng, Xin-An

    2015-01-01

    There is an increased interest in applications of hyperspectral imaging (HSI) for assessing food quality, safety, and authenticity. HSI provides an abundance of spatial and spectral information from foods by combining both spectroscopy and imaging, resulting in hundreds of contiguous wavebands for each spatial position of food samples, a situation also known as the curse of dimensionality. It is desirable to employ feature selection algorithms to decrease the computation burden and increase prediction accuracy, which is especially relevant in the development of online applications. Recently, a variety of feature selection algorithms have been proposed that can be categorized into three groups based on the search strategy, namely complete search, heuristic search, and random search. This review introduces the fundamentals of each algorithm, illustrates its applications in hyperspectral data analysis in the food field, and discusses the advantages and disadvantages of these algorithms. It is hoped that this review will provide a guideline for feature selection and data processing in the future development of hyperspectral imaging techniques in foods.
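
    Of the three search strategies named, heuristic search is the most widely used; sequential forward selection is its canonical form. A sketch with a toy scoring function (the wavelength names and relevance values are hypothetical, standing in for a cross-validated model score):

```python
def forward_selection(features, score, k):
    """Greedy (heuristic-search) feature selection: at each step add the
    feature that most improves the score of the current subset."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if score(selected + [best]) <= score(selected):
            break  # stop when no candidate improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# toy score: per-waveband relevance with diminishing returns for extra bands
relevance = {"970nm": 0.5, "1200nm": 0.3, "1450nm": 0.4, "1650nm": 0.1}
def score(subset):
    return sum(sorted((relevance[f] for f in subset), reverse=True)[i] * 0.5 ** i
               for i in range(len(subset)))

chosen = forward_selection(relevance, score, k=2)
```

    Complete search would instead enumerate all subsets (exponential cost), while random-search methods such as genetic algorithms sample the subset space stochastically.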

  3. Statistical aspects of radiogenomics: can radiogenomics models be used to aid prediction of outcomes in cancer patients?

    NASA Astrophysics Data System (ADS)

    Ren, Boya; Mazurowski, Maciej A.

    2017-03-01

    Radiogenomics is a new direction in cancer research that aims at identifying the relationship between tumor genomics and its appearance in imaging (i.e. its radiophenotype). Recent years brought multiple radiogenomic discoveries in brain, breast, lung, and other cancers. With the development of this new field, we believe it is important to investigate in which settings radiogenomics could be useful, to better direct research effort. One of the general applications of radiogenomics is to generate imaging-based models for the prediction of outcomes, by modeling the relationship between imaging and genomics and the relationship between genomics and outcomes. We believe this is an important potential application of radiogenomics, as it could advance imaging-based precision medicine. We present a preliminary simulation study evaluating whether such an approach results in improved models. We investigate different settings in terms of the strength of the radiogenomic relationship, the prognostic power of the imaging and genomic descriptors, and the availability and quality of data. Our experiments indicated that the following parameters affect the usefulness of the radiogenomic approach: the predictive power of genomic and imaging features, the strength of the radiogenomic relationship, and the number of and follow-up time for the genomic data. Overall, we found that there are situations in which the radiogenomics approach is beneficial, but only when the radiogenomic relationship is strong and only a low number of imaging cases with outcome data is available.

  4. Could texture features from preoperative CT image be used for predicting occult peritoneal carcinomatosis in patients with advanced gastric cancer?

    PubMed

    Kim, Hae Young; Kim, Young Hoon; Yun, Gabin; Chang, Won; Lee, Yoon Jin; Kim, Bohyoung

    2018-01-01

    To retrospectively investigate whether texture features obtained from preoperative CT images of advanced gastric cancer (AGC) patients could be used to predict occult peritoneal carcinomatosis (PC) detected during operation. 51 AGC patients with occult PC detected during operation from January 2009 to December 2012 were included as the occult PC group. For the control group, another 51 AGC patients without evidence of distant metastasis including PC, and whose clinical T and N stages could be matched to those of the occult PC group, were selected from the period of January 2011 to July 2012. Each group was divided into test (n = 41) and validation (n = 10) cohorts. Demographic and clinical data for these patients were acquired from the hospital database. Texture features including average, standard deviation, kurtosis, skewness, entropy, correlation, and contrast were obtained from a manually drawn region of interest (ROI) over the omentum on the axial CT image showing the omentum at its largest cross-sectional area. After using Fisher's exact and Wilcoxon signed-rank tests to compare the clinical and texture features between the two groups of the test cohort, conditional logistic regression analysis was performed to determine significant independent predictors of occult PC. Using the optimal cut-off value from receiver operating characteristic (ROC) analysis for the significant variables, diagnostic sensitivity and specificity were determined in the test cohort. The cut-off value of the significant variables obtained from the test cohort was then applied to the validation cohort. Bonferroni correction was used to adjust P values for multiple comparisons. Between the two groups, there was no significant difference in the clinical features. Regarding the texture features, the occult PC group showed significantly higher average, entropy, and standard deviation, and significantly lower correlation (P value < 0.004 for all). 
Conditional logistic regression analysis demonstrated that entropy was a significant independent predictor of occult PC. When the cut-off value of entropy (>7.141) was applied to the validation cohort, sensitivity and specificity for the prediction of occult PC were 80% and 90%, respectively. For AGC patients whose PC cannot be detected with routine imaging such as CT, texture analysis may be a useful adjunct for the prediction of occult PC.
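
    The entropy feature that proved predictive here is the first-order Shannon entropy of the ROI's grey-level histogram: homogeneous tissue scores low, heterogeneous tissue scores high. A minimal sketch with synthetic pixel values (bin count and grey-level range are illustrative choices, not the study's):

```python
from math import log2
from collections import Counter

def roi_entropy(pixels, bins=8, lo=0, hi=256):
    """First-order Shannon entropy of an ROI's grey-level histogram."""
    width = (hi - lo) / bins
    counts = Counter(min(int((p - lo) / width), bins - 1) for p in pixels)
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

uniform_omentum = [10] * 64               # homogeneous region: low entropy
mottled_omentum = list(range(0, 256, 4))  # heterogeneous region: high entropy
lo_h = roi_entropy(uniform_omentum)
hi_h = roi_entropy(mottled_omentum)
```

    With 8 bins the entropy ranges from 0 (all pixels in one bin) to 3 bits (pixels spread evenly over all bins); occult PC cases in this study clustered toward the high end.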

  5. New features in Saturn's atmosphere revealed by high-resolution thermal infrared images

    NASA Technical Reports Server (NTRS)

    Gezari, D. Y.; Mumma, M. J.; Espenak, F.; Deming, D.; Bjoraker, G.; Woods, L.; Folz, W.

    1989-01-01

    Observations of the stratospheric IR emission structure on Saturn are presented. The high-spatial-resolution global images show a variety of new features, including a narrow equatorial belt of enhanced emission at 7.8 micron, a prominent symmetrical north polar hotspot at all three wavelengths, and a midlatitude structure which is asymmetrically brightened at the east limb. The results confirm the polar brightening and reversal in position predicted by recent models for seasonal thermal variations of Saturn's stratosphere.

  6. Multispectral Image Processing for Plants

    NASA Technical Reports Server (NTRS)

    Miles, Gaines E.

    1991-01-01

    The development of a machine vision system to monitor plant growth and health is one of three essential steps towards establishing an intelligent system capable of accurately assessing the state of a controlled ecological life support system for long-term space travel. Besides a network of sensors, simulators are needed to predict plant features, and artificial intelligence algorithms are needed to determine the state of a plant-based life support system. Multispectral machine vision and image processing can be used to sense plant features, including health and nutritional status.

  7. Evaluation of Imaging Methods in Tick-Borne Encephalitis.

    PubMed

    Zawadzki, Radosław; Garkowski, Adam; Kubas, Bożena; Zajkowska, Joanna; Hładuński, Marcin; Jurgilewicz, Dorota; Łebkowska, Urszula

    2017-01-01

    Tick-borne encephalitis (TBE) is caused by a virus that belongs to the Flaviviridae family and is transmitted by tick bites. The disease has a biphasic course. Diagnosis is based on laboratory examinations because of non-specific clinical features, which usually entails the detection of specific IgM antibodies in either blood or cerebrospinal fluid that appear in the second phase of the disease. Neurological symptoms, time course of the disease, and imaging findings are multifaceted. During the second phase of the disease, after the onset of neurological symptoms, magnetic resonance imaging (MRI) abnormalities are observed in a limited number of cases. However, imaging features may aid in predicting the prognosis of the disease.

  8. Feature selection and classification of multiparametric medical images using bagging and SVM

    NASA Astrophysics Data System (ADS)

    Fan, Yong; Resnick, Susan M.; Davatzikos, Christos

    2008-03-01

    This paper presents a framework for brain classification based on multi-parametric medical images. The method takes advantage of multi-parametric imaging to provide a set of discriminative features for classifier construction, using a regional feature extraction method that accounts for joint correlations among different image parameters; in the experiments herein, MRI and PET images of the brain are used. Support vector machine classifiers are then trained on the most discriminative features selected from the feature set. To facilitate robust classification and optimal selection of the parameters involved, in view of the well-known "curse of dimensionality", base classifiers are constructed in a bagging (bootstrap aggregating) framework to build an ensemble classifier, and their classification parameters are optimized by maximizing the area under the ROC (receiver operating characteristic) curve estimated from their prediction performance on samples left out of the bootstrap sampling. This classification system is tested on a sex classification problem, where it yields over 90% classification rates for unseen subjects. The proposed method is also compared with other commonly used classification algorithms, with favorable results. These results illustrate that methods built upon information jointly extracted from multi-parametric images have the potential to perform individual classification with high sensitivity and specificity.

  9. Prediction of Occult Invasive Disease in Ductal Carcinoma in Situ Using Deep Learning Features.

    PubMed

    Shi, Bibo; Grimm, Lars J; Mazurowski, Maciej A; Baker, Jay A; Marks, Jeffrey R; King, Lorraine M; Maley, Carlo C; Hwang, E Shelley; Lo, Joseph Y

    2018-03-01

    The aim of this study was to determine whether deep features extracted from digital mammograms using a pretrained deep convolutional neural network are prognostic of occult invasive disease for patients with ductal carcinoma in situ (DCIS) on core needle biopsy. In this retrospective study, digital mammographic magnification views were collected for 99 subjects with DCIS at biopsy, 25 of which were subsequently upstaged to invasive cancer. A deep convolutional neural network model that was pretrained on nonmedical images (eg, animals, plants, instruments) was used as the feature extractor. Through a statistical pooling strategy, deep features were extracted at different levels of convolutional layers from the lesion areas, without sacrificing the original resolution or distorting the underlying topology. A multivariate classifier was then trained to predict which tumors contain occult invasive disease. This was compared with the performance of traditional "handcrafted" computer vision (CV) features previously developed specifically to assess mammographic calcifications. The generalization performance was assessed using Monte Carlo cross-validation and receiver operating characteristic curve analysis. Deep features were able to distinguish DCIS with occult invasion from pure DCIS, with an area under the receiver operating characteristic curve of 0.70 (95% confidence interval, 0.68-0.73). This performance was comparable with the handcrafted CV features (area under the curve = 0.68; 95% confidence interval, 0.66-0.71) that were designed with prior domain knowledge. Despite being pretrained on only nonmedical images, the deep features extracted from digital mammograms demonstrated comparable performance with handcrafted CV features for the challenging task of predicting DCIS upstaging. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  10. Brain properties predict proximity to symptom onset in sporadic Alzheimer’s disease

    PubMed Central

    Vogel, Jacob W; Vachon-Presseau, Etienne; Pichet Binette, Alexa; Tam, Angela; Orban, Pierre; La Joie, Renaud; Savard, Mélissa; Picard, Cynthia; Poirier, Judes; Bellec, Pierre; Breitner, John C S; Villeneuve, Sylvia

    2018-01-01

    Abstract See Tijms and Visser (doi:10.1093/brain/awy113) for a scientific commentary on this article. Alzheimer’s disease is preceded by a lengthy ‘preclinical’ stage spanning many years, during which subtle brain changes occur in the absence of overt cognitive symptoms. Predicting when the onset of disease symptoms will occur is an unsolved challenge in individuals with sporadic Alzheimer’s disease. In individuals with autosomal dominant genetic Alzheimer’s disease, the age of symptom onset is similar across generations, allowing the prediction of individual onset times with some accuracy. We extend this concept to persons with a parental history of sporadic Alzheimer’s disease to test whether an individual’s symptom onset age can be informed by the onset age of their affected parent, and whether this estimated onset age can be predicted using only MRI. Structural and functional MRIs were acquired from 255 ageing cognitively healthy subjects with a parental history of sporadic Alzheimer’s disease from the PREVENT-AD cohort. Years to estimated symptom onset was calculated as participant age minus age of parental symptom onset. Grey matter volume was extracted from T1-weighted images and whole-brain resting state functional connectivity was evaluated using degree count. Both modalities were summarized using a 444-region cortical-subcortical atlas. The entire sample was divided into training (n = 138) and testing (n = 68) sets. Within the training set, individuals closer to or beyond their parent’s symptom onset demonstrated reduced grey matter volume and altered functional connectivity, specifically in regions known to be vulnerable in Alzheimer’s disease. Machine learning was used to identify a weighted set of imaging features trained to predict years to estimated symptom onset. This feature set alone significantly predicted years to estimated symptom onset in the unseen testing data. 
This model, using only neuroimaging features, significantly outperformed a similar model instead trained with cognitive, genetic, imaging and demographic features used in a traditional clinical setting. We next tested if these brain properties could be generalized to predict time to clinical progression in a subgroup of 26 individuals from the Alzheimer’s Disease Neuroimaging Initiative, who eventually converted either to mild cognitive impairment or to Alzheimer’s dementia. The feature set trained on years to estimated symptom onset in the PREVENT-AD predicted variance in time to clinical conversion in this separate longitudinal dataset. Adjusting for participant age did not impact any of the results. These findings demonstrate that years to estimated symptom onset or similar measures can be predicted from brain features and may help estimate presymptomatic disease progression in at-risk individuals. PMID:29688388

  11. Predication of different stages of Alzheimer's disease using neighborhood component analysis and ensemble decision tree.

    PubMed

    Jin, Mingwu; Deng, Weishu

    2018-05-15

    There is a spectrum of progression from healthy control (HC) to mild cognitive impairment (MCI) without conversion to Alzheimer's disease (AD), to MCI with conversion to AD (cMCI), and to AD. This study aims to predict these disease stages using brain structural information provided by magnetic resonance imaging (MRI) data. Neighborhood component analysis (NCA) is applied to select the most powerful features for prediction, and an ensemble decision tree classifier is built to predict which group a subject belongs to. The best features and model parameters are determined by cross-validation on the training data. Our results show that 16 out of a total of 429 features were selected by NCA using 240 training subjects, including the MMSE score and structural measures in memory-related regions. The boosting tree model with NCA features achieved a prediction accuracy of 56.25% on 160 test subjects. For comparison, principal component analysis (PCA) and sequential feature selection (SFS) were used for feature selection, and support vector machine (SVM) for classification. The boosting tree model with NCA features outperformed all other combinations of feature selection and classification methods. The results suggest that NCA is a better feature selection strategy than PCA and SFS for the data used in this study, and that an ensemble tree classifier with boosting is more powerful than SVM for predicting the subject group. However, more advanced feature selection and classification methods, or additional measures besides structural MRI, may be needed to improve prediction performance. Copyright © 2018 Elsevier B.V. All rights reserved.
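    The NCA-then-boosted-trees pipeline above can be sketched as follows. This is a loose approximation, not the authors' code: scikit-learn's `NeighborhoodComponentsAnalysis` learns a full linear transform rather than per-feature weights, so ranking original features by the column norms of that transform is an illustrative heuristic standing in for true NCA feature weights.

```python
# Hedged sketch: NCA-style feature ranking followed by a boosted tree
# ensemble, on synthetic data standing in for the MRI-derived features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NeighborhoodComponentsAnalysis

X, y = make_classification(n_samples=240, n_features=50, n_informative=10,
                           random_state=1)

nca = NeighborhoodComponentsAnalysis(n_components=10, random_state=1)
nca.fit(X, y)

# Approximate per-feature importance: each original feature's
# contribution to the learned NCA transform.
weights = np.linalg.norm(nca.components_, axis=0)
top16 = np.argsort(weights)[-16:]          # keep the 16 strongest features

clf = GradientBoostingClassifier(random_state=1)
acc = cross_val_score(clf, X[:, top16], y, cv=5).mean()
print(f"5-fold accuracy on NCA-selected features: {acc:.2f}")
```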

  12. PREVAIL: Predicting Recovery through Estimation and Visualization of Active and Incident Lesions.

    PubMed

    Dworkin, Jordan D; Sweeney, Elizabeth M; Schindler, Matthew K; Chahin, Salim; Reich, Daniel S; Shinohara, Russell T

    2016-01-01

    The goal of this study was to develop a model that integrates imaging and clinical information observed at lesion incidence for predicting the recovery of white matter lesions in multiple sclerosis (MS) patients. Demographic, clinical, and magnetic resonance imaging (MRI) data were obtained from 60 subjects with MS as part of a natural history study at the National Institute of Neurological Disorders and Stroke. A total of 401 lesions met the inclusion criteria and were used in the study. Imaging features were extracted from the intensity-normalized T1-weighted (T1w) and T2-weighted sequences as well as magnetization transfer ratio (MTR) sequence acquired at lesion incidence. T1w and MTR signatures were also extracted from images acquired one-year post-incidence. Imaging features were integrated with clinical and demographic data observed at lesion incidence to create statistical prediction models for long-term damage within the lesion. The performance of the T1w and MTR predictions was assessed in two ways: first, the predictive accuracy was measured quantitatively using leave-one-lesion-out cross-validated (CV) mean-squared predictive error. Then, to assess the prediction performance from the perspective of expert clinicians, three board-certified MS clinicians were asked to individually score how similar the CV model-predicted one-year appearance was to the true one-year appearance for a random sample of 100 lesions. The cross-validated root-mean-square predictive error was 0.95 for normalized T1w and 0.064 for MTR, compared to the estimated measurement errors of 0.48 and 0.078 respectively. The three expert raters agreed that T1w and MTR predictions closely resembled the true one-year follow-up appearance of the lesions in both degree and pattern of recovery within lesions. This study demonstrates that by using only information from a single visit at incidence, we can predict how a new lesion will recover using relatively simple statistical techniques. 
The potential to visualize the likely course of recovery has implications for clinical decision-making, as well as trial enrichment.
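    The leave-one-lesion-out evaluation of predictive error described above can be illustrated with a minimal sketch. The features, outcome, and linear model here are synthetic stand-ins, not the study's actual predictors; only the cross-validation pattern is the point.

```python
# Hedged sketch: leave-one-out cross-validated root-mean-square
# predictive error for a simple linear model, analogous to the
# leave-one-lesion-out evaluation described above.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                             # incidence features
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=60)   # one-year outcome

# Each sample is predicted by a model trained on all other samples.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
rmse = np.sqrt(np.mean((y - pred) ** 2))
print(f"leave-one-out CV RMSE: {rmse:.3f}")
```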

  13. The Value of 5-Aminolevulinic Acid in Low-grade Gliomas and High-grade Gliomas Lacking Glioblastoma Imaging Features: An Analysis Based on Fluorescence, Magnetic Resonance Imaging, 18F-Fluoroethyl Tyrosine Positron Emission Tomography, and Tumor Molecular Factors

    PubMed Central

    Jaber, Mohammed; Wölfer, Johannes; Ewelt, Christian; Holling, Markus; Hasselblatt, Martin; Niederstadt, Thomas; Zoubi, Tarek; Weckesser, Matthias

    2015-01-01

    BACKGROUND: Approximately 20% of grade II and most grade III gliomas fluoresce after 5-aminolevulinic acid (5-ALA) application. Conversely, approximately 30% of nonenhancing gliomas are actually high grade. OBJECTIVE: The aim of this study was to identify preoperative factors (ie, age, enhancement, 18F-fluoroethyl tyrosine positron emission tomography [18F-FET PET] uptake ratios) for predicting fluorescence in gliomas without typical glioblastoma imaging features and to determine whether fluorescence allows prediction of tumor grade or molecular characteristics. METHODS: Patients harboring gliomas without typical glioblastoma imaging features were given 5-ALA. Fluorescence was recorded intraoperatively, and biopsy specimens were collected from fluorescing tissue. World Health Organization (WHO) grade, Ki-67/MIB-1 index, IDH1 (R132H) mutation status, O6-methylguanine DNA methyltransferase (MGMT) promoter methylation status, and 1p/19q co-deletion status were assessed. Predictive factors for fluorescence were derived from preoperative magnetic resonance imaging and 18F-FET PET. Classification and regression tree analysis and receiver-operating-characteristic curves were generated for defining predictors. RESULTS: Of 166 tumors, 82 were diagnosed as WHO grade II, 76 as grade III, and 8 as glioblastomas grade IV. Contrast enhancement, tumor volume, and an 18F-FET PET uptake ratio >1.85 predicted fluorescence. Fluorescence correlated with WHO grade (P < .001) and Ki-67/MIB-1 index (P < .001), but not with MGMT promoter methylation status, IDH1 mutation status, or 1p/19q co-deletion status. The Ki-67/MIB-1 index in fluorescing grade III gliomas was higher than in nonfluorescing tumors, whereas in fluorescing and nonfluorescing grade II tumors, no differences were noted. CONCLUSION: Age, tumor volume, and 18F-FET PET uptake are factors predicting 5-ALA-induced fluorescence in gliomas without typical glioblastoma imaging features.
    Fluorescence was associated with an increased Ki-67/MIB-1 index and high-grade pathology. Whether fluorescence in grade II gliomas identifies a subtype with worse prognosis remains to be determined. ABBREVIATIONS: 5-ALA, 5-aminolevulinic acid; CRT, classification and regression tree; 18F-FET PET, 18F-fluoroethyl tyrosine positron emission tomography; FLAIR, fluid-attenuated inversion recovery; GBM, glioblastoma multiforme; O6-MGMT, O6-methylguanine DNA methyltransferase; ROC, receiver-operating characteristic; SUV, standardized uptake value; WHO, World Health Organization. PMID:26366972

  14. Utility of Intermediate-Delay Washout CT Images for Differentiation of Malignant and Benign Adrenal Lesions: A Multivariate Analysis.

    PubMed

    Ng, Chaan S; Altinmakas, Emre; Wei, Wei; Ghosh, Payel; Li, Xiao; Grubbs, Elizabeth G; Perrier, Nancy D; Lee, Jeffrey E; Prieto, Victor G; Hobbs, Brian P

    2018-06-27

    The objective of this study was to identify features that impact the diagnostic performance of intermediate-delay washout CT for distinguishing malignant from benign adrenal lesions. This retrospective study evaluated 127 pathologically proven adrenal lesions (82 malignant, 45 benign) in 126 patients who had undergone portal venous phase and intermediate-delay washout CT (1-3 minutes after portal venous phase) with or without unenhanced images. Unenhanced images were available for 103 lesions. Lesion CT attenuation on unenhanced (UA) and delayed (DL) images, the absolute and relative percentages of enhancement washout (APEW and RPEW, respectively), descriptive CT features (lesion size, margin characteristics, heterogeneity or homogeneity, fat, calcification), patient demographics, and medical history were evaluated for association with lesion status using multiple logistic regression with stepwise model selection. Area under the ROC curve (Az) was calculated from both univariate and multivariate analyses. The predictive diagnostic performance of multivariate evaluations was ascertained through cross-validation. Az for DL, APEW, RPEW, and UA was 0.751, 0.795, 0.829, and 0.839, respectively. Multivariate analyses yielded the following significant CT quantitative features and associated Az when combined: RPEW and DL (Az = 0.861) when unenhanced images were not available, and APEW and UA (Az = 0.889) when unenhanced images were available. Patient demographics and the presence of a prior malignancy were additional significant factors, increasing Az to 0.903 and 0.927, respectively. The combined predictive classifier, without and with UA available, yielded 85.7% and 87.3% accuracies with cross-validation, respectively. When appropriately combined with other CT features, washout derived from intermediate-delay CT with or without additional clinical data has potential utility in differentiating malignant from benign adrenal lesions.
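    The washout quantities referenced above (APEW and RPEW) follow the standard adrenal CT definitions: absolute washout compares the enhancement loss against the full enhancement above unenhanced attenuation, while relative washout needs no unenhanced scan. A minimal sketch, with illustrative attenuation values that are not from the study:

```python
# Hedged sketch of the standard adrenal washout calculations.
# enhanced, delayed, unenhanced are lesion attenuation values (HU) on
# the portal venous, intermediate-delay, and unenhanced images.
def absolute_washout(enhanced: float, delayed: float, unenhanced: float) -> float:
    """Absolute percentage of enhancement washout (APEW); requires unenhanced HU."""
    return 100.0 * (enhanced - delayed) / (enhanced - unenhanced)

def relative_washout(enhanced: float, delayed: float) -> float:
    """Relative percentage of enhancement washout (RPEW); no unenhanced HU needed."""
    return 100.0 * (enhanced - delayed) / enhanced

# Example values (illustrative only, not taken from the study):
print(f"APEW: {absolute_washout(80.0, 50.0, 10.0):.1f}%")   # ~42.9%
print(f"RPEW: {relative_washout(80.0, 50.0):.1f}%")         # 37.5%
```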

  15. Performance comparison of quantitative semantic features and lung-RADS in the National Lung Screening Trial

    NASA Astrophysics Data System (ADS)

    Li, Qian; Balagurunathan, Yoganand; Liu, Ying; Schabath, Matthew; Gillies, Robert J.

    2016-03-01

    Background: Lung-RADS is the new oncology classification guideline proposed by the American College of Radiology (ACR), which provides recommendations for further follow-up in lung cancer screening. However, only two features (solidity and size) are included in this system. We hypothesize that additional semantic features can be used to better characterize lung nodules and diagnose cancer. Objective: We propose to develop and characterize a systematic methodology based on semantic image traits to more accurately predict the occurrence of cancerous nodules. Methods: 24 radiological image traits were systematically scored on a point scale (up to 5) by a trained radiologist, and Lung-RADS was independently scored. A linear discriminant model was used on the semantic features to assess their performance in predicting cancer status. The semantic predictors were then compared to Lung-RADS classification in 199 patients (60 cancers, 139 normal controls) obtained from the National Lung Screening Trial. Results: Several combinations of semantic features were strong predictors of cancer status. Of these, contour, border definition, size, solidity, focal emphysema, focal fibrosis, and location emerged as top candidates. A model with two semantic features (short axial diameter and contour) had an AUC of 0.945, comparable to that of Lung-RADS (AUC: 0.871). Conclusion: We propose that a semantics-based discrimination approach may act as a complement to Lung-RADS to predict cancer status.
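    A linear discriminant model over a pair of scored semantic traits, evaluated by AUC, can be sketched as below. The 1-5 point-scale scores here are synthetic and the two trait names are only illustrative; this is not the study's data or code.

```python
# Hedged sketch: linear discriminant analysis on two semantic trait
# scores (synthetic stand-ins for, e.g., diameter and contour), with
# AUC as the performance measure.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_cancer, n_control = 60, 139
# Synthetic 1-5 point-scale scores: cancers tend to score higher.
cancer = rng.integers(3, 6, size=(n_cancer, 2))
control = rng.integers(1, 4, size=(n_control, 2))
X = np.vstack([cancer, control]).astype(float)
y = np.array([1] * n_cancer + [0] * n_control)

lda = LinearDiscriminantAnalysis().fit(X, y)
auc = roc_auc_score(y, lda.decision_function(X))
print(f"LDA AUC: {auc:.3f}")
```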

  16. TU-D-207B-02: Delta-Radiomics: The Prognostic Value of Therapy-Induced Changes in Radiomics Features for Stage III Non-Small Cell Lung Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fave, X; Court, L; UT Health Science Center, Graduate School of Biomedical Sciences, Houston, TX

    Purpose: To determine how radiomics features change during radiation therapy and whether those changes (delta-radiomics features) can improve prognostic models built with clinical factors. Methods: 62 radiomics features, including histogram, co-occurrence, run-length, gray-tone difference, and shape features, were calculated from pretreatment and weekly intra-treatment CTs for 107 stage III NSCLC patients (5–9 images per patient). Image preprocessing for each feature was determined using the set of pretreatment images: bit-depth resampling and/or a smoothing filter were tested for their impact on volume correlation and on the significance of each feature in univariate Cox regression models, to maximize information content. Next, the optimized features were calculated from the intra-treatment images and tested in linear mixed-effects models to determine which features changed significantly with dose-fraction. The slopes of these significant features were defined as delta-radiomics features. To test their prognostic potential, multivariate Cox regression models were fitted, first using only clinical features and then clinical plus delta-radiomics features, for overall survival, local recurrence, and distant metastases. Leave-one-out cross-validation was used for model fitting and patient predictions. Concordance indices (c-index) and p-values for the log-rank test, with patients stratified at the median, were calculated. Results: Approximately one-half of the 62 optimized features required no preprocessing, one-fourth required smoothing, and one-fourth required smoothing and resampling. Of these, 54 changed significantly during treatment. For overall survival, the c-index improved from 0.52 for clinical factors alone to 0.62 for clinical plus delta-radiomics features. For distant metastases, the c-index improved from 0.53 to 0.58, while for local recurrence it did not improve. 
Patient stratification significantly improved (p < 0.05) for overall survival and distant metastases when delta-radiomics features were included. The delta-radiomics versions of autocorrelation, kurtosis, and compactness were selected most frequently in leave-one-out iterations. Conclusion: Weekly changes in radiomics features can potentially be used to evaluate treatment response and predict patient outcomes. High-risk patients could be recommended for dose escalation or consolidation chemotherapy. This project was funded in part by grants from the National Cancer Institute (NCI) and the Cancer Prevention Research Institute of Texas (CPRIT).
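    The delta-radiomics idea above, a feature's slope across weekly images, can be illustrated per patient with a plain least-squares fit. The study used linear mixed-effects models across all patients; the single-patient slope below is a simplified stand-in, and the simulated kurtosis values are not study data.

```python
# Hedged sketch: a per-patient "delta-radiomics" feature computed as
# the least-squares slope of a radiomics feature over treatment weeks.
import numpy as np

rng = np.random.default_rng(3)

def delta_feature(weeks: np.ndarray, values: np.ndarray) -> float:
    """Slope of feature value vs. treatment week (units: feature/week)."""
    slope, _intercept = np.polyfit(weeks, values, deg=1)
    return slope

# Simulated weekly kurtosis values for one patient (7 images),
# declining by ~0.15 per week with small measurement noise.
weeks = np.arange(7, dtype=float)
values = 2.0 - 0.15 * weeks + 0.01 * rng.normal(size=7)
print(f"delta-kurtosis: {delta_feature(weeks, values):.3f} per week")
```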

  17. TU-CD-BRB-09: Prediction of Chemo-Radiation Outcome for Rectal Cancer Based On Radiomics of Tumor Clinical Characteristics and Multi-Parametric MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Shi, L

    2015-06-15

    Purpose: To evaluate tumor clinical characteristics and quantitative multi-parametric MR imaging features for predicting response to chemo-radiation treatment (CRT) in locally advanced rectal cancer (LARC). Methods: Forty-three consecutive patients (59.7±6.9 years, from 09/2013 to 06/2014) receiving neoadjuvant CRT followed by surgery were enrolled. All underwent MRI including anatomical T1/T2, dynamic contrast-enhanced (DCE)-MRI and diffusion-weighted MRI (DWI) prior to treatment. A total of 151 quantitative features, including morphology/Gray Level Co-occurrence Matrix (GLCM) texture from T1/T2, enhancement kinetics and the voxelized distribution from DCE-MRI, and apparent diffusion coefficient (ADC) from DWI, along with clinical information (carcinoembryonic antigen [CEA] level, TNM staging, etc.), were extracted for each patient. Response groups were separated based on down-staging, good response, and pathological complete response (pCR) status. Logistic regression analysis (LRA) was used to select the best predictors for classifying the groups, and predictive performance was calculated using receiver operating characteristic (ROC) analysis. Results: Any individual imaging category or the clinical characteristics alone yielded only a certain level of power in assessing response; the combined model outperformed each category alone in prediction. With Volume, GLCM AutoCorrelation (T2), MaxEnhancementProbability (DCE-MRI), and MeanADC (DWI) as selected features, the down-staging prediction accuracy (area under the ROC curve, AUC) reached 0.95, better than individual tumor metrics with AUCs of 0.53–0.85. For pCR prediction, the best set included CEA (clinical characteristics), Homogeneity (DCE-MRI), and MeanADC (DWI), with an AUC of 0.89, more favorable compared to conventional tumor metrics with AUCs ranging from 0.51–0.79. 
Conclusion: Through a systematic analysis of multi-parametric MR imaging features, we are able to build models with improved predictive value over conventional imaging or clinical metrics. This is encouraging and suggests that the wealth of imaging radiomics should be further explored to help tailor treatment in the era of personalized medicine. This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), the National High Technology Research and Development Program of China (863 Program, Grant No. 2015AA020917), and the Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.

  18. Deep machine learning provides state-of-the-art performance in image-based plant phenotyping.

    PubMed

    Pound, Michael P; Atkinson, Jonathan A; Townsend, Alexandra J; Wilson, Michael H; Griffiths, Marcus; Jackson, Aaron S; Bulat, Adrian; Tzimiropoulos, Georgios; Wells, Darren M; Murchie, Erik H; Pridmore, Tony P; French, Andrew P

    2017-10-01

    In plant phenotyping, it has become important to be able to measure many features on large image sets in order to aid genetic discovery. The size of the datasets, now often captured robotically, precludes manual inspection, hence the motivation for finding a fully automated approach. Deep learning is an emerging field that promises unparalleled results on many data analysis problems. Building on artificial neural networks, deep approaches have many more hidden layers in the network, and hence have greater discriminative and predictive power. We demonstrate the use of such approaches as part of a plant phenotyping pipeline. We show the success offered by such techniques when applied to the challenging problem of image-based plant phenotyping and demonstrate state-of-the-art results (>97% accuracy) for root and shoot feature identification and localization. We use fully automated trait identification using deep learning to identify quantitative trait loci in root architecture datasets. The majority (12 out of 14) of manually identified quantitative trait loci were also discovered using our automated approach based on deep learning detection to locate plant features. We have shown deep learning-based phenotyping to have very good detection and localization accuracy in validation and testing image sets. We have shown that such features can be used to derive meaningful biological traits, which in turn can be used in quantitative trait loci discovery pipelines. This process can be completely automated. We predict a paradigm shift in image-based phenotyping brought about by such deep learning approaches, given sufficient training sets. © The Authors 2017. Published by Oxford University Press.

  19. Multiresolution texture models for brain tumor segmentation in MRI.

    PubMed

    Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir

    2011-01-01

    In this study we discuss different types of texture features, such as fractal dimension (FD) and multifractional Brownian motion (mBm), for estimating the random structure and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques, including Kullback–Leibler divergence (KLD), to rank different texture and intensity features. We then exploit graph cut, self-organizing map (SOM) and expectation maximization (EM) techniques to fuse the selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate the quality and robustness of these selected features for tumor segmentation in MRI of real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme using improved AdaBoost classification based on these image features.
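    Fractal dimension, one of the texture features named above, is commonly estimated by box counting. The sketch below is a textbook box-counting estimator for a binary 2-D image, not the authors' implementation, and assumes a square power-of-two image side for simplicity.

```python
# Hedged sketch: box-counting fractal dimension (FD) of a binary image.
# Boxes of shrinking size s are tiled over the image; FD is the slope
# of log(count of occupied boxes) vs. log(1/s).
import numpy as np

def box_counting_fd(binary: np.ndarray) -> float:
    n = binary.shape[0]           # assumes a square, power-of-two side
    sizes, counts = [], []
    s = n
    while s >= 2:
        # Group pixels into (n/s x n/s) boxes of side s and count the
        # boxes containing any foreground pixel.
        view = binary.reshape(n // s, s, n // s, s)
        occupied = view.any(axis=(1, 3)).sum()
        sizes.append(s)
        counts.append(max(int(occupied), 1))
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Sanity check: a filled square should have FD close to 2.
img = np.ones((64, 64), dtype=bool)
print(f"estimated FD: {box_counting_fd(img):.2f}")
```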

  20. Enhanced Image-Aided Navigation Algorithm with Automatic Calibration and Affine Distortion Prediction

    DTIC Science & Technology

    2012-03-01

    References (fragment): Lowe, David G. "Distinctive Image Features from Scale-Invariant Keypoints." International Journal of Computer Vision, 2004; Maybeck, Peter S...

  1. Predicting tumor hypoxia in non-small cell lung cancer by combining CT, FDG PET and dynamic contrast-enhanced CT.

    PubMed

    Even, Aniek J G; Reymen, Bart; La Fontaine, Matthew D; Das, Marco; Jochems, Arthur; Mottaghy, Felix M; Belderbos, José S A; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter

    2017-11-01

    Most solid tumors contain inadequately oxygenated (i.e., hypoxic) regions, which tend to be more aggressive and treatment resistant. Hypoxia PET allows visualization of hypoxia and may enable treatment adaptation. However, hypoxia PET imaging is expensive, time-consuming and not widely available. We aimed to predict hypoxia levels in non-small cell lung cancer (NSCLC) using more easily available imaging modalities: FDG-PET/CT and dynamic contrast-enhanced CT (DCE-CT). For 34 NSCLC patients, included in two clinical trials, hypoxia HX4-PET/CT, planning FDG-PET/CT and DCE-CT scans were acquired before radiotherapy. Scans were non-rigidly registered to the planning CT. Tumor blood flow (BF) and blood volume (BV) were calculated by kinetic analysis of DCE-CT images. Within the gross tumor volume, independent clusters, i.e., supervoxels, were created based on FDG-PET/CT. For each supervoxel, tumor-to-background ratios (TBR) were calculated (median SUV / aorta SUVmean) for HX4-PET/CT, and supervoxel features (median, SD, entropy) for the other modalities. Two random forest models (cross-validated: 10 folds, five repeats) were trained to predict the hypoxia TBR; one based on CT, FDG, BF and BV, and one with only CT and FDG features. Patients were split into a training (trial NCT01024829) and an independent test set (trial NCT01210378). For each patient, predicted and observed hypoxic volumes (HV; TBR > 1.2) were compared. Fifteen patients (3291 supervoxels) were used for training and 19 patients (1502 supervoxels) for testing. The model with all features (RMSE training: 0.19 ± 0.01, test: 0.27) outperformed the model with only CT and FDG-PET features (RMSE training: 0.20 ± 0.01, test: 0.29). All tumors of the test set were correctly classified as normoxic or hypoxic (HV > 1 cm³) by the best performing model. We created a data-driven methodology to predict hypoxia levels and hypoxia spatial patterns using CT, FDG-PET and DCE-CT features in NSCLC. 
The model correctly classified all tumors and could therefore aid tumor hypoxia classification and patient stratification.
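    The repeated-cross-validated random forest regression described above can be sketched on synthetic data. The feature matrix below is a stand-in for the per-supervoxel CT/FDG/BF/BV summaries, and the simulated TBR target is illustrative only.

```python
# Hedged sketch: random forest regression of a hypoxia tumor-to-
# background ratio (TBR) from supervoxel features, scored by repeated
# k-fold cross-validated RMSE.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RepeatedKFold, cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 12))                 # supervoxel feature vectors
tbr = 1.0 + 0.2 * X[:, 0] - 0.1 * X[:, 1] + 0.05 * rng.normal(size=300)

cv = RepeatedKFold(n_splits=5, n_repeats=2, random_state=4)
scores = cross_val_score(RandomForestRegressor(n_estimators=50, random_state=4),
                         X, tbr, scoring="neg_root_mean_squared_error", cv=cv)
rmse = -scores.mean()
print(f"repeated CV RMSE: {rmse:.3f}")
```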

  2. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    PubMed

    Kellman, Philip J; Mnookin, Jennifer L; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness-of-fit measures on the original data set and in a cross-validation test. 
The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons.

  3. Biomechanical Model for Computing Deformations for Whole-Body Image Registration: A Meshless Approach

    PubMed Central

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-01-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2-D models and computing single organ deformations. In this study, 3-D comprehensive patient-specific non-linear biomechanical models implemented using Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithms are applied to predict a 3-D deformation field for whole-body image registration. Unlike a conventional approach which requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the Fuzzy C-Means (FCM) algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. PMID:26791945
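
    The edge-based Hausdorff evaluation described above can be sketched with SciPy; the two edge point sets below are tiny hypothetical coordinate lists, not data from the study.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Hypothetical edge point sets (N x 2 arrays of pixel coordinates),
# e.g. extracted from the registered and target images by an edge detector.
registered_edges = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])
target_edges = np.array([[0.0, 0.2], [1.1, 0.0], [2.0, 0.9]])

# Symmetric Hausdorff distance: the max of the two directed distances.
d_forward = directed_hausdorff(registered_edges, target_edges)[0]
d_backward = directed_hausdorff(target_edges, registered_edges)[0]
hausdorff = max(d_forward, d_backward)
```

    A small Hausdorff distance indicates that every edge in one image has a nearby edge in the other, i.e. good spatial alignment.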

  4. Combination of lateral and PA view radiographs to study development of knee OA and associated pain

    NASA Astrophysics Data System (ADS)

    Minciullo, Luca; Thomson, Jessie; Cootes, Timothy F.

    2017-03-01

    Knee Osteoarthritis (OA) is the most common form of arthritis, affecting millions of people around the world. The effects of the disease have been studied using the shape and texture features of bones in Posterior-Anterior (PA) and Lateral radiographs separately. In this work we compare the utility of features from each view, and evaluate whether combining features from both is advantageous. We built a fully automated system to independently locate landmark points in both radiographic images using Random Forest Constrained Local Models. We extracted discriminative features from the two bony outlines using Appearance Models. The features were used to train Random Forest classifiers to solve three specific tasks: (i) OA classification, distinguishing patients with structural signs of OA from the others; (ii) predicting future onset of the disease; and (iii) predicting which patients with no current pain will have a positive pain score later in a follow-up visit. Using a subset of the MOST dataset we show that the PA view has more discriminative features for classifying and predicting OA, while the lateral view contains features that achieve better performance in predicting pain, and that combining the features from both views gives a small improvement in classification accuracy compared to the individual views.

  5. Land Covers Classification Based on Random Forest Method Using Features from Full-Waveform LIDAR Data

    NASA Astrophysics Data System (ADS)

    Ma, L.; Zhou, M.; Li, C.

    2017-09-01

    In this study, a Random Forest (RF) based land cover classification method is presented to predict the types of land cover in the Miyun area. The returned full waveforms, acquired by a LiteMapper 5600 airborne LiDAR system, were processed, including waveform filtering, waveform decomposition and feature extraction. The commonly used features, namely distance, intensity, Full Width at Half Maximum (FWHM), skewness and kurtosis, were extracted. These waveform features were used as attributes of the training data for generating the RF prediction model. The RF prediction model was applied to classify the land cover in the Miyun area into trees, buildings, farmland and ground. The classification results for these four land cover types were evaluated against ground truth information acquired from CCD image data of the same region. The RF classification results were compared with those of an SVM method and showed better performance: the RF classification accuracy reached 89.73% and the classification Kappa was 0.8631.
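
    The RF classification-with-Kappa workflow can be sketched with scikit-learn; the five waveform features and the four-class labels below are synthetic stand-ins, since the LiteMapper data are not included here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Synthetic stand-in for waveform features: distance, intensity, FWHM,
# skewness, kurtosis (5 columns), with one label per echo.
X = rng.normal(size=(400, 5))
y = rng.integers(0, 4, size=400)  # 0=trees, 1=buildings, 2=farmland, 3=ground
X[:, 0] += y  # shift one feature per class so the toy data are learnable

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:300], y[:300])
pred = clf.predict(X[300:])
kappa = cohen_kappa_score(y[300:], pred)  # chance-corrected agreement
```

    Cohen's Kappa corrects overall accuracy for agreement expected by chance, which is why it is reported alongside accuracy in the record above.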

  6. Electroencephalogram-based decoding cognitive states using convolutional neural network and likelihood ratio based score fusion

    PubMed Central

    2017-01-01

    Decoding human brain activity from the electroencephalogram (EEG) is challenging, owing to the low spatial resolution of EEG. However, EEG is an important technique, especially for brain–computer interface applications. In this study, a novel algorithm is proposed to decode brain activity associated with different types of images. In this hybrid algorithm, a convolutional neural network is modified for feature extraction, a t-test is used for the selection of significant features, and likelihood ratio-based score fusion is used for the prediction of brain activity. The proposed algorithm takes as input multichannel EEG time-series, an approach also known as multivariate pattern analysis. Comprehensive analysis was conducted using data from 30 participants. The results from the proposed method are compared with currently recognized feature extraction and classification/prediction techniques. The wavelet transform-support vector machine method, currently the most popular feature extraction and prediction method, showed an accuracy of 65.7%, whereas the proposed method predicted the novel data with an improved accuracy of 79.9%. In conclusion, the proposed algorithm outperformed the current feature extraction and prediction methods. PMID:28558002
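
    Likelihood ratio-based score fusion of the kind described can be sketched under the simplifying assumption of Gaussian class-conditional score distributions; all means and standard deviations below are made-up illustration values, not estimates from the study's EEG data.

```python
from scipy.stats import norm

# Assumed per-feature score distributions for the two classes,
# (mu_class1, sigma_class1, mu_class0, sigma_class0), as would be
# estimated on training data.
params = [
    (1.0, 0.5, 0.0, 0.5),
    (0.8, 0.4, 0.1, 0.4),
]

def fused_log_lr(scores):
    """Sum per-feature log-likelihood ratios log p(s|c1) - log p(s|c0)."""
    total = 0.0
    for s, (m1, s1, m0, s0) in zip(scores, params):
        total += norm.logpdf(s, m1, s1) - norm.logpdf(s, m0, s0)
    return total

# Scores near the class-1 means give a positive fused LLR; scores near
# the class-0 means give a negative one.
llr_pos = fused_log_lr([0.9, 0.7])
llr_neg = fused_log_lr([0.0, 0.1])
```

    The fused log-likelihood ratio is then thresholded (at 0 for equal priors) to make the final class decision.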

  7. Assessment of seasonal features based on Landsat time series for tree crown cover mapping in Burkina Faso

    NASA Astrophysics Data System (ADS)

    Liu, Jinxiu; Heiskanen, Janne; Aynekuly, Ermias; Pellikka, Petri

    2016-04-01

    Tree crown cover (CC) is an important vegetation attribute for land cover characterization, and for mapping and monitoring forest cover. Free data from Landsat and Sentinel-2 allow construction of fine resolution satellite image time series and extraction of seasonal features for predicting vegetation attributes. In the savannas, surface reflectance varies distinctly between the rainy and dry seasons, making seasonal features useful for CC mapping. However, it is unclear whether it is better to use spectral bands or vegetation indices (VI) for computation of seasonal features, and how feasible different VI are for CC prediction in the savanna woodlands and agroforestry parklands of West Africa. In this study, the objective was to compare seasonal features based on spectral bands and VI for CC mapping in southern Burkina Faso. A total of 35 Landsat images from November 2013 to October 2014 were processed. Seasonal features were computed using a harmonic model with three parameters (mean, amplitude and phase), with spectral bands, the normalized difference vegetation index (NDVI), green normalized difference vegetation index (GNDVI), normalized difference water index (NDWI) and tasseled cap (TC) indices (brightness, greenness, wetness) as input data. The seasonal features were employed to predict field-estimated CC (n = 160) using the Random Forest algorithm. The most accurate results were achieved when using seasonal features based on TC indices (R2: 0.65; RMSE: 10.7%) and spectral bands (R2: 0.64; RMSE: 10.8%). GNDVI performed better than NDVI or NDWI, and NDWI gave the poorest results (R2: 0.56; RMSE: 11.9%). The results indicate that spectral features should be carefully selected for CC prediction, as shown by the relatively poor performance of the commonly used NDVI. The seasonal features based on the three TC indices and all the spectral bands provided superior accuracy in comparison to any single VI. This study thus presents a feasible method for mapping CC from seasonal features, with the possibility of integrating medium resolution satellite observations from several sensors (e.g. Landsat and Sentinel-2) in the future.
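
    Fitting a three-parameter harmonic model (mean, amplitude, phase) to a per-pixel reflectance time series can be sketched as a linear least-squares problem; the acquisition dates and signal below are synthetic, with assumed true parameters of mean 0.3, amplitude 0.1 and phase 0.5 rad.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic day-of-year acquisition times and an annual reflectance signal.
t = np.sort(rng.uniform(0, 365, size=35))
omega = 2 * np.pi / 365.0
y = 0.3 + 0.1 * np.cos(omega * t - 0.5) + rng.normal(0, 0.005, size=t.size)

# Linearize as c + a*cos(wt) + b*sin(wt), solve by least squares,
# then recover amplitude and phase from (a, b).
A = np.column_stack([np.cos(omega * t), np.sin(omega * t), np.ones_like(t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b, c = coef
amplitude = np.hypot(a, b)
phase = np.arctan2(b, a)
mean = c
```

    The three recovered parameters per band or index are exactly the kind of seasonal features the record above feeds to the Random Forest.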

  8. Developing a NIR multispectral imaging for prediction and visualization of peanut protein content using variable selection algorithms

    NASA Astrophysics Data System (ADS)

    Cheng, Jun-Hu; Jin, Huali; Liu, Zhiwei

    2018-01-01

    The feasibility of developing a multispectral imaging method, using important wavelengths selected from hyperspectral images by genetic algorithm (GA), successive projection algorithm (SPA) and regression coefficient (RC) methods, for modeling and predicting protein content in peanut kernels was investigated for the first time. A partial least squares regression (PLSR) calibration model was established between the spectral data from the selected optimal wavelengths and the reference protein content, which ranged from 23.46% to 28.43%. The RC-PLSR model established using eight key wavelengths (1153, 1567, 1972, 2143, 2288, 2339, 2389 and 2446 nm) showed the best predictive results, with a coefficient of determination of prediction (R2P) of 0.901, a root mean square error of prediction (RMSEP) of 0.108 and a residual predictive deviation (RPD) of 2.32. Based on the best model and image processing algorithms, distribution maps of protein content were generated. The overall results of this study indicated that a rapid, online multispectral imaging system using the feature wavelengths and PLSR analysis is feasible for determination of the protein content in peanut kernels.

  9. A memory learning framework for effective image retrieval.

    PubMed

    Han, Junwei; Ngan, King N; Li, Mingjing; Zhang, Hong-Jiang

    2005-04-01

    Most current content-based image retrieval systems are still incapable of providing users with their desired results. The major difficulty lies in the gap between low-level image features and high-level image semantics. To address the problem, this study reports a framework for effective image retrieval by employing a novel idea of memory learning. It forms a knowledge memory model to store the semantic information by simply accumulating user-provided interactions. A learning strategy is then applied to predict the semantic relationships among images according to the memorized knowledge. Image queries are finally performed based on a seamless combination of low-level features and learned semantics. One important advantage of our framework is its ability to efficiently annotate images and also propagate the keyword annotation from the labeled images to unlabeled images. The presented algorithm has been integrated into a practical image retrieval system. Experiments on a collection of 10,000 general-purpose images demonstrate the effectiveness of the proposed framework.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, P; Wang, J; Zhong, H

    Purpose: To evaluate the reproducibility of radiomics features by repeating computed tomographic (CT) scans in rectal cancer, and to choose stable radiomics features for rectal cancer. Methods: 40 rectal cancer patients were enrolled in this study, each of whom underwent two CT scans within an average of 8.7 days (range, 5 to 17 days), before any treatment was delivered. The rectal gross tumor volume (GTV) was distinguished and segmented by an experienced oncologist in both CTs. In total, more than 2000 radiomics features were defined in this study, divided into four groups (I: GLCM, II: GLRLM, III: Wavelet GLCM and IV: Wavelet GLRLM). For each group, five types of features were extracted (Max slice: features from the largest slice of the target images; Max value: features from all slices of the target images, taking the maximum value; Min value: minimum value of the features over all slices; Average value: average value of the features over all slices; Matrix sum: all slices of the target images translated into GLCM and GLRLM matrices, the matrices superposed, and features extracted from the superposed matrix). Meanwhile a LOG (Laplacian of Gaussian) filter with different parameters was applied to these images. Concordance correlation coefficients (CCC) and inter-class correlation coefficients (ICC) were calculated to assess the reproducibility. Results: 403 radiomics features were extracted for each feature type from the patients' medical images. Features of the average type were the most reproducible. Different filters had little effect on the radiomics features. For the average type features, 253 out of 403 features (62.8%) showed high reproducibility (ICC≥0.8), 133 out of 403 features (33.0%) showed medium reproducibility (0.5≤ICC<0.8) and 17 out of 403 features (4.2%) showed low reproducibility (ICC<0.5). Conclusion: The average type radiomics features are the most stable features in rectal cancer. Further analysis of these features of rectal cancer can be warranted for treatment monitoring and prognosis prediction.
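
    The concordance correlation coefficient (CCC) used for the test-retest comparison can be computed directly from Lin's definition; the two scan vectors below are toy values standing in for one feature measured on the same patients twice.

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Toy test-retest values of one radiomics feature from two scans.
scan1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scan2 = np.array([1.1, 1.9, 3.2, 3.8, 5.1])
ccc = concordance_ccc(scan1, scan2)
```

    Unlike the Pearson correlation, the CCC penalizes both location and scale shifts between the two scans, so a feature only scores near 1 when the repeated measurements actually agree.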

  11. Intratumor heterogeneity characterized by textural features on baseline 18F-FDG PET images predicts response to concomitant radiochemotherapy in esophageal cancer.

    PubMed

    Tixier, Florent; Le Rest, Catherine Cheze; Hatt, Mathieu; Albarghach, Nidal; Pradier, Olivier; Metges, Jean-Philippe; Corcos, Laurent; Visvikis, Dimitris

    2011-03-01

    (18)F-FDG PET is often used in clinical routine for diagnosis, staging, and response to therapy assessment or prediction. The standardized uptake value (SUV) in the primary or regional area is the most common quantitative measurement derived from PET images used for those purposes. The aim of this study was to propose and evaluate new parameters obtained by textural analysis of baseline PET scans for the prediction of therapy response in esophageal cancer. Forty-one patients with newly diagnosed esophageal cancer treated with combined radiochemotherapy were included in this study. All patients underwent pretreatment whole-body (18)F-FDG PET. Patients were treated with radiotherapy and alkylating-like agents (5-fluorouracil-cisplatin or 5-fluorouracil-carboplatin). Patients were classified as nonresponders (progressive or stable disease), partial responders, or complete responders according to the Response Evaluation Criteria in Solid Tumors. Different image-derived indices obtained from the pretreatment PET tumor images were considered. These included usual indices such as maximum SUV, peak SUV, and mean SUV, and a total of 38 features (such as entropy, size, and magnitude of local and global heterogeneous and homogeneous tumor regions) extracted from the 5 different textures considered. The capacity of each parameter to classify patients with respect to response to therapy was assessed using the Kruskal-Wallis test (P < 0.05). Specificity and sensitivity (including 95% confidence intervals) for each of the studied parameters were derived using receiver-operating-characteristic curves. Relationships between pairs of voxels, characterizing local tumor metabolic nonuniformities, were able to significantly differentiate all 3 patient groups (P < 0.0006).
Regional measures of tumor characteristics, such as size of nonuniform metabolic regions and corresponding intensity nonuniformities within these regions, were also significant factors for prediction of response to therapy (P = 0.0002). Receiver-operating-characteristic curve analysis showed that tumor textural analysis can provide nonresponder, partial-responder, and complete-responder patient identification with higher sensitivity (76%-92%) than any SUV measurement. Textural features of tumor metabolic distribution extracted from baseline (18)F-FDG PET images allow for the best stratification of esophageal carcinoma patients in the context of therapy-response prediction.
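
    The voxel-pair relationships mentioned above are captured by a grey-level co-occurrence matrix (GLCM); a minimal NumPy construction is sketched below on a random 8-level image standing in for a quantized PET tumor region (libraries such as scikit-image provide equivalent functionality).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a tumor image quantized to 8 grey levels (e.g. binned SUVs).
levels = 8
img = rng.integers(0, levels, size=(32, 32))

# Co-occurrence counts for the horizontal offset (0, 1), made symmetric
# and normalized into a joint probability table.
glcm = np.zeros((levels, levels))
for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[i, j] += 1
glcm = glcm + glcm.T
glcm /= glcm.sum()

# Two classic texture indices derived from the GLCM.
ii, jj = np.indices(glcm.shape)
homogeneity = float((glcm / (1.0 + np.abs(ii - jj))).sum())
entropy = float(-(glcm[glcm > 0] * np.log2(glcm[glcm > 0])).sum())
```

    High entropy and low homogeneity indicate a spatially heterogeneous uptake pattern, which is the property the study relates to therapy response.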

  12. Multivariate Feature Selection of Image Descriptors Data for Breast Cancer with Computer-Assisted Diagnosis

    PubMed Central

    Galván-Tejada, Carlos E.; Zanella-Calzada, Laura A.; Galván-Tejada, Jorge I.; Celaya-Padilla, José M.; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L.

    2017-01-01

    Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, mammography for early detection has been demonstrated to be a very important tool for increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost functions in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions. PMID:28216571

  13. Multivariate Feature Selection of Image Descriptors Data for Breast Cancer with Computer-Assisted Diagnosis.

    PubMed

    Galván-Tejada, Carlos E; Zanella-Calzada, Laura A; Galván-Tejada, Jorge I; Celaya-Padilla, José M; Gamboa-Rosales, Hamurabi; Garza-Veloz, Idalia; Martinez-Fierro, Margarita L

    2017-02-14

    Breast cancer is an important global health problem, and the most common type of cancer among women. Late diagnosis significantly decreases the survival rate of the patient; however, mammography for early detection has been demonstrated to be a very important tool for increasing the survival rate. The purpose of this paper is to obtain a multivariate model to classify benign and malignant tumor lesions using computer-assisted diagnosis with a genetic algorithm in training and test datasets from mammography image features. A multivariate search was conducted to obtain predictive models with different approaches, in order to compare and validate results. The multivariate models were constructed using Random Forest, Nearest centroid, and K-Nearest Neighbor (K-NN) strategies as cost functions in a genetic algorithm applied to the features in the BCDR public databases. Results suggest that the two texture descriptor features obtained in the multivariate model have a similar or better prediction capability to classify the data outcome compared with the multivariate model composed of all the features, according to their fitness value. This model can help to reduce the workload of radiologists and present a second opinion in the classification of tumor lesions.

  14. TH-E-BRF-04: Characterizing the Response of Texture-Based CT Image Features for Quantification of Radiation-Induced Normal Lung Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krafft, S; Court, L; Briere, T

    2014-06-15

    Purpose: Radiation induced lung damage (RILD) is an important dose-limiting toxicity for patients treated with radiation therapy. Scoring systems for RILD are subjective and limit our ability to find robust predictors of toxicity. We investigate the dose- and time-related response of texture-based lung CT image features that serve as potential quantitative measures of RILD. Methods: Pre- and post-RT diagnostic imaging studies were collected for retrospective analysis of 21 patients treated with photon or proton radiotherapy for NSCLC. Total lung and selected isodose contours (0–5, 5–15, 15–25Gy, etc.) were deformably registered from the treatment planning scan to the pre-RT and available follow-up CT studies for each patient. A CT image analysis framework was utilized to extract 3698 unique texture-based features (including co-occurrence and run length matrices) for each region of interest defined by the isodose contours and the total lung volume. Linear mixed models were fit to determine the relationship between feature change (relative to pre-RT), planned dose and time post-RT. Results: Seventy-three follow-up CT scans from 21 patients (median: 3 scans/patient) were analyzed to describe CT image feature change. At the p=0.05 level, dose affected feature change in 2706 (73.1%) of the available features. Similarly, time affected feature change in 408 (11.0%) of the available features. Both dose and time were significant predictors of feature change in a total of 231 (6.2%) of the extracted image features. Conclusion: Characterizing the dose- and time-related response of a large number of texture-based CT image features is the first step toward identifying objective measures of lung toxicity necessary for assessment and prediction of RILD. There is evidence that numerous features are sensitive to both the radiation dose and time after RT. Beyond characterizing feature response, further investigation is warranted to determine the utility of these features as surrogates of clinically significant lung injury.

  15. Prediction of outcome using pretreatment 18F-FDG PET/CT and MRI radiomics in locally advanced cervical cancer treated with chemoradiotherapy.

    PubMed

    Lucia, François; Visvikis, Dimitris; Desseroit, Marie-Charlotte; Miranda, Omar; Malhaire, Jean-Pierre; Robin, Philippe; Pradier, Olivier; Hatt, Mathieu; Schick, Ulrike

    2018-05-01

    The aim of this study was to determine whether radiomics features from 18F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) and magnetic resonance imaging (MRI) images could contribute to prognosis in cervical cancer. One hundred and two patients (69 for training and 33 for testing) with locally advanced cervical cancer (LACC) receiving chemoradiotherapy (CRT) from 08/2010 to 12/2016 were enrolled in this study. 18F-FDG PET/CT and MRI examinations [T1, T2, T1C, diffusion-weighted imaging (DWI)] were performed for each patient before CRT. Primary tumor volumes were delineated with the fuzzy locally adaptive Bayesian algorithm in the PET images and with 3D Slicer™ in the MRI images. Radiomics features (intensity, shape, and texture) were extracted and their prognostic value was compared with clinical parameters for recurrence-free and locoregional control. In the training cohort, median follow-up was 3.0 years (range, 0.43-6.56 years) and relapse occurred in 36% of patients. In univariate analysis, FIGO stage (I-II vs. III-IV) and metabolic response (complete vs. non-complete) were probably associated with outcome without reaching statistical significance, contrary to several radiomics features from both PET and MRI sequences. Multivariate analysis in the training cohort identified grey-level non-uniformity from GLRLM in PET and entropy from GLCM in ADC maps from DWI MRI as independent prognostic factors. These had significantly higher prognostic power than clinical parameters, as evaluated in the testing cohort, with accuracy of 94% for predicting recurrence and 100% for predicting lack of loco-regional control (versus ~50-60% for clinical parameters). In LACC treated with CRT, radiomics features such as EntropyGLCM and GLNUGLRLM from functional imaging DWI-MRI and PET, respectively, are independent predictors of recurrence and loco-regional control with significantly higher prognostic power than usual clinical parameters.
Further research is warranted for their validation, which may justify more aggressive treatment in patients identified with high probability of recurrence.

  16. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †.

    PubMed

    Lee, Yeongjun; Choi, Jinwoo; Ko, Nak Yong; Choi, Hyun-Taek

    2017-08-24

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of weaknesses of the sonar image such as an unstable acoustic source, many speckle noises, low resolution, single channel imagery, and so on. However, using consecutive sonar images, if the status, i.e., the existence and identity (or name), of an object is continuously evaluated by a stochastic method, the recognition result can be accompanied by an uncertainty estimate, making it more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probability methods, particle filtering and Bayesian feature estimation, are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar, exploiting the characteristics of acoustic waves such as their instability and their reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented.
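
    The repeated predict-and-update of an object's status over consecutive frames can be illustrated with a minimal recursive Bayesian update of the object's existence probability; the detection and false-alarm rates below are assumed illustration values, and this is deliberately far simpler than the paper's particle filter.

```python
# Assumed detector characteristics (illustration values only).
P_DETECT_IF_PRESENT = 0.8   # probability the detector fires when present
P_DETECT_IF_ABSENT = 0.1    # false-alarm probability

def update_existence(prior, detected):
    """One Bayes update of P(object exists) from one frame's detection."""
    like_present = P_DETECT_IF_PRESENT if detected else 1 - P_DETECT_IF_PRESENT
    like_absent = P_DETECT_IF_ABSENT if detected else 1 - P_DETECT_IF_ABSENT
    return (like_present * prior
            / (like_present * prior + like_absent * (1 - prior)))

# Start uncertain, then fold in four consecutive frames (one missed detection).
p = 0.5
for detected in [True, True, False, True]:
    p = update_existence(p, detected)
```

    A single missed detection lowers the belief only slightly, which is exactly the robustness over single-image recognition that the record argues for.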

  17. Characterizing mammographic images by using generic texture features

    PubMed Central

    2012-01-01

    Introduction: Although mammographic density is an established risk factor for breast cancer, its use is limited in clinical practice because of a lack of automated and standardized measurement methods. The aims of this study were to evaluate a variety of automated texture features in mammograms as risk factors for breast cancer and to compare them with the percentage mammographic density (PMD) by using a case-control study design. Methods: A case-control study including 864 cases and 418 controls was analyzed automatically. Four hundred seventy features were explored as possible risk factors for breast cancer. These included statistical features, moment-based features, spectral-energy features, and form-based features. An elaborate variable selection process using logistic regression analyses was performed to identify those features that were associated with case-control status. In addition, PMD was assessed and included in the regression model. Results: Of the 470 image-analysis features explored, 46 remained in the final logistic regression model. An area under the curve of 0.79, with an odds ratio per standard deviation change of 2.88 (95% CI, 2.28 to 3.65), was obtained with validation data. Adding the PMD did not improve the final model. Conclusions: Using texture features to predict the risk of breast cancer appears feasible. PMD did not show any additional value in this study. With regard to the features assessed, most of the analysis tools appeared to reflect mammographic density, although some features did not correlate with PMD. It remains to be investigated in larger case-control studies whether these features can contribute to increased prediction accuracy. PMID:22490545
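
    The reported "odds ratio per standard deviation change" corresponds to exponentiating a logistic-regression coefficient fitted on a standardized feature; the sketch below uses synthetic case-control data in place of the study's mammographic features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

# Synthetic stand-in: one texture feature, shifted upward in cases (y=1).
n = 600
y = rng.integers(0, 2, size=n)
feature = rng.normal(loc=0.8 * y, scale=1.0)

# Standardize so the coefficient is on a per-standard-deviation scale.
z = ((feature - feature.mean()) / feature.std()).reshape(-1, 1)

model = LogisticRegression().fit(z, y)
odds_ratio_per_sd = float(np.exp(model.coef_[0, 0]))
auc = float(roc_auc_score(y, model.decision_function(z)))
```

    Because the feature was standardized before fitting, exp(coefficient) reads directly as the change in odds per one-SD increase, matching how the record reports its 2.88 odds ratio.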

  18. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.

  19. Prediction of response to neoadjuvant chemotherapy in breast cancer: a radiomic study

    NASA Astrophysics Data System (ADS)

    Wu, Guolin; Fan, Ming; Zhang, Juan; Zheng, Bin; Li, Lihua

    2017-03-01

    Breast cancer is one of the most common malignancies among women worldwide. Neoadjuvant chemotherapy (NACT) has gained interest and has been increasingly used in the treatment of breast cancer in recent years. Therefore, it is necessary to find a reliable non-invasive assessment and prediction method which can evaluate and predict the response to NACT. Recent studies have highlighted the use of MRI for predicting response to NACT. In addition, molecular subtype could also effectively identify patients who are likely to have a better prognosis in breast cancer. In this study, a radiomic analysis was performed by extracting features from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), with immunohistochemistry (IHC) used to determine subtypes. A dataset of fifty-seven breast cancer patients was included, all of whom received a preoperative MRI examination. Among them, 47 patients had complete response (CR) or partial response (PR) and 10 had stable disease (SD) to chemotherapy based on the RECIST criterion. A total of 216 imaging features, including statistical characteristics, morphology, texture and dynamic enhancement, were extracted from DCE-MRI. In multivariate analysis, the proposed imaging predictors achieved an AUC of 0.923 (P = 0.0002) in leave-one-out cross-validation. The performance of the classifier increased to 0.960, 0.950 and 0.936 when the status of the HER2, Luminal A and Luminal B subtypes was added into the statistical model, respectively. The results of this study demonstrated that IHC-determined molecular status combined with radiomic features from DCE-MRI could be used as a clinical marker that is associated with response to NACT.
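
    A leave-one-out cross-validated AUC of the kind reported can be sketched with scikit-learn; the 57 patients, four features and response labels below are synthetic stand-ins, and a generic logistic classifier replaces the study's multivariate model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)

# Synthetic stand-in: 57 patients, 4 imaging features, binary response
# (1 = CR/PR, 0 = SD), with a class ratio roughly as in the record.
X = rng.normal(size=(57, 4))
y = (rng.random(57) < 0.8).astype(int)
X[:, 0] += 1.5 * y  # make responders partially separable on one feature

# Leave-one-out: fit on 56 patients, score the held-out one, pool scores.
scores = np.empty(57)
for train, test in LeaveOneOut().split(X):
    clf = LogisticRegression().fit(X[train], y[train])
    scores[test] = clf.decision_function(X[test])
auc = float(roc_auc_score(y, scores))
```

    Pooling the held-out decision scores before computing a single AUC is the standard way to report leave-one-out performance on a cohort this small.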

  20. Introducing two Random Forest based methods for cloud detection in remote sensing images

    NASA Astrophysics Data System (ADS)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite images processing to retrieve the atmospheric and lithospheric parameters. Currently, some cloud detection methods based on Random Forest (RF) model have been proposed but they do not consider both spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF) to incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF) including Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI) or visible, IR and thermal classifiers (DLFRF) for highly accurate cloud detection on remote sensing images. FLFRF first fuses visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels to cloud, snow/ice and background or thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and inserts each set of features to RF model. Then, it holds vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to input feature set cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than other machine learning methods, Linear Discriminate Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than other traditional methods. 
    The quantitative values on Landsat 8 images show a similar trend. Consequently, while SVM and K-nearest neighbor overestimate cloud and snow/ice pixels, our RF-based models achieve higher cloud and snow/ice kappa values on MODIS images, and higher thin cloud, thick cloud and snow/ice kappa values on Landsat 8 images. Our algorithms predict both thin and thick cloud on Landsat 8 images, whereas the existing cloud detection algorithm Fmask cannot discriminate them. Compared to state-of-the-art methods, our algorithms achieve higher average cloud and snow/ice kappa values across different spatial resolutions.
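
    The decision-level fusion step above reduces, per pixel, the labels voted by the visible, IR and thermal classifiers to a single class by majority vote. A minimal sketch of that fusion rule (the labels and three-classifier setup here are illustrative, not taken from the paper):

```python
from collections import Counter

def majority_vote(classifier_votes):
    """Fuse the labels voted by several classifiers for one pixel.

    classifier_votes: list of labels, one per classifier run (e.g. the
    visible, IR and thermal RF classifiers). Ties are broken in favor
    of the label encountered first.
    """
    return Counter(classifier_votes).most_common(1)[0][0]

def decision_level_fusion(vote_matrices):
    """Fuse per-classifier label maps into one label map.

    vote_matrices: list of label lists of equal length (one label per
    pixel per classifier).
    """
    return [majority_vote(votes) for votes in zip(*vote_matrices)]

# Hypothetical 4-pixel example with classes 'cloud', 'snow', 'bg'
visible = ['cloud', 'snow', 'bg', 'cloud']
ir      = ['cloud', 'bg',   'bg', 'snow']
thermal = ['snow',  'snow', 'bg', 'cloud']
print(decision_level_fusion([visible, ir, thermal]))
# -> ['cloud', 'snow', 'bg', 'cloud']
```
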

  1. Reduced reference image quality assessment via sub-image similarity based redundancy measurement

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Xue, Wufeng; Zhang, Lei

    2012-03-01

    Reduced reference (RR) image quality assessment (IQA) has been attracting much attention from researchers for its fidelity to human perception and flexibility in practice. A promising RR metric should be able to predict the perceptual quality of an image accurately while using as few features as possible. In this paper, a novel RR metric is presented whose novelty lies in two aspects. First, it measures image redundancy by calculating the so-called Sub-image Similarity (SIS), and image quality is measured by comparing the SIS between the reference image and the test image. Second, the SIS is computed from the ratios of Non-shift Edges (NSE) between pairs of sub-images. Experiments on two IQA databases (the LIVE and CSIQ databases) show that, using only 6 features, the proposed metric works very well, with high correlations between the subjective and objective scores. In particular, it works consistently well across all the distortion types.

  2. Deep 3D Convolutional Encoder Networks With Shortcuts for Multiscale Feature Integration Applied to Multiple Sclerosis Lesion Segmentation.

    PubMed

    Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger

    2016-05-01

    We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways: a convolutional pathway, which learns increasingly abstract, higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (the MICCAI 2008 and ISBI 2015 challenges), with results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.

  3. What automated age estimation of hand and wrist MRI data tells us about skeletal maturation in male adolescents.

    PubMed

    Urschler, Martin; Grassegger, Sabine; Štern, Darko

    2015-01-01

    Age estimation of individuals is important in human biology and has various medical and forensic applications. Recent interest in MR-based methods aims to investigate alternatives to established methods involving ionising radiation; automatic, software-based methods additionally promise improved estimation objectivity. Our aim was to investigate how informative automatically selected image features are in discriminating age, by exploring a recently proposed software-based age estimation method for MR images of the left hand and wrist. One hundred and two MR datasets of left hand images were used to evaluate age estimation performance; the method consists of bone and epiphyseal gap volume localisation, computation of one age regression model per bone mapping image features to age, and fusion of the individual bone age predictions into a final age estimate. Quantitative results of the software-based method show an age estimation performance with a mean absolute difference of 0.85 years (SD = 0.58 years) from chronological age, as determined by a cross-validation experiment. Qualitatively, it is demonstrated how feature selection works and which image features of skeletal maturation are automatically chosen to model the non-linear regression function. The feasibility of automatic age estimation based on MRI data is shown, and the selected image features are found to be informative for describing anatomical changes during physical maturation in male adolescents.
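
    The final step above fuses one age prediction per bone into a single estimate. A minimal sketch of such a fusion, assuming a simple weighted mean (the actual fusion rule used by the method may differ, and the per-bone values below are hypothetical):

```python
def fuse_bone_ages(bone_predictions, weights=None):
    """Fuse per-bone age estimates (in years) into one final estimate.

    A weighted mean with uniform weights by default; weights could,
    for example, reflect per-bone regression confidence.
    """
    if weights is None:
        weights = [1.0] * len(bone_predictions)
    total = sum(weights)
    return sum(p * w for p, w in zip(bone_predictions, weights)) / total

# Hypothetical per-bone age estimates for one subject
print(fuse_bone_ages([14.2, 15.0, 14.8, 14.6]))  # uniform mean
```
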

  4. Interpretation of fingerprint image quality features extracted by self-organizing maps

    NASA Astrophysics Data System (ADS)

    Danov, Ivan; Olsen, Martin A.; Busch, Christoph

    2014-05-01

    Accurate prediction of fingerprint quality is of significant importance to any fingerprint-based biometric system. Ensuring high-quality samples for both probe and reference can substantially improve the system's performance by lowering false non-matches, thus allowing finer adjustment of the decision threshold of the biometric system. Furthermore, the increasing use of biometrics in mobile contexts demands lightweight methods suited to the operational environment. A novel, computationally efficient two-tier approach was recently proposed that models block-wise fingerprint image data using a Self-Organizing Map (SOM) to extract specific ridge pattern features, which are then used as input to a Random Forests (RF) classifier trained to predict the quality score of a presented sample. This paper presents a comparative analysis on a publicly available dataset aimed at improving the two-tier approach, proposing three additional feature interpretation methods based respectively on the SOM, Generative Topographic Mapping and the RF. The analysis shows that two of the proposed methods produce promising results on the given dataset.

  5. Classification of brain tumors using texture based analysis of T1-post contrast MR scans in a preclinical model

    NASA Astrophysics Data System (ADS)

    Tang, Tien T.; Zawaski, Janice A.; Francis, Kathleen N.; Qutub, Amina A.; Gaber, M. Waleed

    2018-02-01

    Accurate diagnosis of tumor type is vital for effective treatment planning. Diagnosis relies heavily on tumor biopsies and other clinical factors. However, biopsies do not fully capture a tumor's heterogeneity due to sampling bias, and are only performed if the tumor is accessible. An alternative approach is to use features derived from routine diagnostic imaging such as magnetic resonance (MR) imaging. In this study we aim to establish the use of quantitative image features to classify brain tumors, extending the use of MR images beyond tumor detection and localization. To control for inter-scanner, acquisition and reconstruction protocol variations, the workflow was established in a preclinical model. Using glioma (U87 and GL261) and medulloblastoma (Daoy) models, T1-weighted post-contrast scans were acquired at different time points post-implant. The central, middle, and peripheral tumor regions were analyzed using in-house software to extract 32 different image features consisting of first- and second-order features. The extracted features were used to construct a decision tree, which could predict tumor type with 10-fold cross-validation. Results from the final classification model demonstrated that the middle tumor region had the highest overall accuracy at 79%, while the AUC was over 0.90 for GL261 and U87 tumors. Our analysis further identified image features that were unique to certain tumor regions, although GL261 tumors were more homogeneous, with no significant differences between the central and peripheral tumor regions. In conclusion, our study shows that texture features derived from MR scans can be used to classify tumor type with high success rates. Furthermore, the algorithm we have developed can be applied to any imaging dataset and may be applicable to multiple tumor types to determine diagnosis.
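
    Second-order texture features like those above are typically derived from a gray-level co-occurrence matrix (GLCM). A minimal sketch in plain Python, assuming a single pixel offset and a pre-quantized image (the paper's in-house software and its exact 32 features are not reproduced here):

```python
import math
from collections import defaultdict

def glcm(image, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    image: 2D list of integer gray levels. Returns a dict mapping
    (level_i, level_j) pairs to their co-occurrence probability.
    """
    counts = defaultdict(int)
    h, w = len(image), len(image[0])
    n = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[(image[y][x], image[y2][x2])] += 1
                n += 1
    return {pair: c / n for pair, c in counts.items()}

def texture_features(p):
    """Energy, entropy and homogeneity from a normalized GLCM."""
    energy = sum(v * v for v in p.values())
    entropy = -sum(v * math.log2(v) for v in p.values() if v > 0)
    homogeneity = sum(v / (1 + abs(i - j)) for (i, j), v in p.items())
    return energy, entropy, homogeneity

# Hypothetical 4x4 image quantized to 4 gray levels
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
print(texture_features(glcm(img)))
```

In practice several offsets (horizontal, vertical, diagonal) are computed and their features averaged to reduce directional bias.
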

  6. A unified framework for automatic wound segmentation and analysis with deep convolutional neural networks.

    PubMed

    Wang, Changhan; Yan, Xinchen; Smith, Max; Kochhar, Kanika; Rubin, Marcie; Warren, Stephen M; Wrobel, James; Lee, Honglak

    2015-01-01

    Wound surface area changes over multiple weeks are highly predictive of the wound healing process. Furthermore, the quality and quantity of the tissue in the wound bed also offer important prognostic information. Unfortunately, accurate measurements of wound surface area changes are out of reach in the busy wound practice setting. Currently, clinicians estimate wound size by estimating wound width and length using a scalpel after wound treatment, which is highly inaccurate. To address this problem, we propose an integrated system to automatically segment wound regions and analyze wound conditions in wound images. Different from previous segmentation techniques which rely on handcrafted features or unsupervised approaches, our proposed deep learning method jointly learns task-relevant visual features and performs wound segmentation. Moreover, learned features are applied to further analysis of wounds in two ways: infection detection and healing progress prediction. To the best of our knowledge, this is the first attempt to automate long-term predictions of general wound healing progress. Our method is computationally efficient and takes less than 5 seconds per wound image (480 by 640 pixels) on a typical laptop computer. Our evaluations on a large-scale wound database demonstrate the effectiveness and reliability of the proposed system.

  7. Prostate tissue characterization/classification in 144 patient population using wavelet and higher order spectra features from transrectal ultrasound images.

    PubMed

    Pareek, Gyan; Acharya, U Rajendra; Sree, S Vinitha; Swapna, G; Yantri, Ratna; Martis, Roshan Joy; Saba, Luca; Krishnamurthi, Ganapathy; Mallarini, Giorgio; El-Baz, Ayman; Al Ekish, Shadi; Beland, Michael; Suri, Jasjit S

    2013-12-01

    In this work, we propose an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image as cancerous or non-cancerous with the help of non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. The UroImage system consists of an on-line system where five significant features (one DWT-based feature and four HOS-based features) are extracted from the test image. These on-line features are transformed by the classifier parameters obtained using the training dataset to determine the class. We trained and tested six classifiers. The dataset used for evaluation had 144 TRUS images, which were split into training and testing sets. Three-fold and ten-fold cross-validation protocols were adopted for training and estimating the accuracy of the classifiers. The ground truth used for training was obtained from the biopsy results. Among the six classifiers, using the 10-fold cross-validation technique, the Support Vector Machine and Fuzzy Sugeno classifiers presented the best classification accuracy of 97.9%, with equally high values for sensitivity, specificity and positive predictive value. Our proposed automated system, which achieved values above 95% for all the performance measures, can be an adjunct tool to provide an initial diagnosis for the identification of patients with prostate cancer. The technique, however, is limited by the limitations of 2D ultrasound-guided biopsy, and we intend to improve our technique by using 3D TRUS images in the future.
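
    DWT coefficients such as those used above come from repeatedly splitting a signal into low-pass (approximation) and high-pass (detail) halves. A minimal one-level Haar DWT sketch, for illustration only (the wavelet family used by the paper and the derivation of its specific DWT-based feature are not reproduced):

```python
def haar_dwt_1d(signal):
    """One level of the Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists; the input
    length is assumed to be even.
    """
    s = 2 ** -0.5  # orthonormal scaling factor
    approx = [(signal[i] + signal[i + 1]) * s
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s
              for i in range(0, len(signal), 2)]
    return approx, detail

a, d = haar_dwt_1d([4.0, 2.0, 5.0, 5.0])
print(a, d)
```

For a 2D image the same transform is applied along rows and then along columns, producing the sub-bands from which scalar features (e.g. sub-band energies) are computed.
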

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagher-Ebadian, H; Chetty, I; Liu, C

    Purpose: To examine the impact of image smoothing and noise on the robustness of textural information extracted from CBCT images for prediction of radiotherapy response for patients with head/neck (H/N) cancers. Methods: CBCT image datasets for 14 patients with H/N cancer treated with radiation (70 Gy in 35 fractions) were investigated. A deformable registration algorithm was used to fuse planning CTs to CBCTs. Tumor volume was automatically segmented on each CBCT image dataset. Local control at 1 year was used to classify 8 patients as responders (R) and 6 as non-responders (NR). A smoothing filter [2D Adaptive Wiener (2DAW) with 3 different windows (ψ=3, 5, and 7)] and two noise models (Poisson and Gaussian, SNR=25) were implemented and independently applied to the CBCT images. Twenty-two textural features, describing the spatial arrangement of voxel intensities calculated from gray-level co-occurrence matrices, were extracted for all tumor volumes. Results: Relative to CBCT images without smoothing, none of the 22 extracted textural features showed any significant differences when smoothing was applied (using the 2DAW with filtering parameters of ψ=3 and 5), in either the responder or non-responder group. When smoothing with the 2DAW at ψ=7 was applied, one textural feature, Information Measure of Correlation, was significantly different relative to no smoothing. Only 4 features (Energy, Entropy, Homogeneity, and Maximum-Probability) were found to be statistically different between the R and NR groups (Table 1). These features remained statistically significant discriminators of the R and NR groups in the presence of noise and smoothing. Conclusion: This preliminary work suggests that textural classifiers for response prediction, extracted from H/N CBCT images, are robust to low-power noise and low-pass filtering. While other types of filters will alter the spatial frequencies differently, these results are promising. The current study is subject to Type II errors; a much larger cohort of patients is needed to confirm these results. This work was supported in part by a grant from Varian Medical Systems (Palo Alto, CA).

  9. Transfer learning for bimodal biometrics recognition

    NASA Astrophysics Data System (ADS)

    Dan, Zhiping; Sun, Shuifa; Chen, Yanfei; Gan, Haitao

    2013-10-01

    Biometrics recognition aims to identify and predict new personal identities based on existing knowledge. As the use of multiple biometric traits of an individual enables more information to be used for recognition, it has been shown that multi-biometrics can produce higher accuracy than single biometrics. However, a common problem with traditional machine learning is that the training and test data should be in the same feature space and have the same underlying distribution. If the distributions and features differ between training and future data, model performance often drops. In this paper, we propose a transfer learning method for face recognition on bimodal biometrics. The training and test samples of bimodal biometric images are composed of visible-light face images and infrared face images. Our algorithm transfers knowledge across feature spaces, relaxing the assumptions of a shared feature space and a shared underlying distribution by automatically learning a mapping between two different but related kinds of face images. Experiments on the face images show that the accuracy of face recognition is greatly improved by the proposed method compared with previous methods, demonstrating its effectiveness and robustness.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamichhane, N; Johnson, P; Chinea, F

    Purpose: To evaluate the correlation between image features and the accuracy of manually drawn target contours on synthetic PET images. Methods: A digital PET phantom was used in combination with Monte Carlo simulation to create a set of 26 simulated PET images featuring a variety of tumor shapes and activity heterogeneity. These tumor volumes were used as a gold standard in comparisons with manual contours delineated by 10 radiation oncologists on the simulated PET images. Metrics used to evaluate segmentation accuracy included the dice coefficient, false positive dice, false negative dice, symmetric mean absolute surface distance, and absolute volumetric difference. Image features extracted from the simulated tumors consisted of volume, shape complexity, mean curvature, and intensity contrast, along with five texture features derived from the gray-level neighborhood difference matrices: contrast, coarseness, busyness, strength, and complexity. Correlations between these features and contouring accuracy were examined. Results: Contour accuracy was reasonably well correlated with a variety of image features. The dice coefficient ranged from 0.70 to 0.90 and was correlated closely with contrast (r=0.43, p=0.02) and complexity (r=0.5, p<0.001). False negative dice ranged from 0.10 to 0.50 and was correlated closely with contrast (r=0.68, p<0.001) and complexity (r=0.66, p<0.001). Absolute volumetric difference ranged from 0.0002 to 0.67 and was correlated closely with coarseness (r=0.46, p=0.02) and complexity (r=0.49, p=0.008). Symmetric mean absolute surface distance ranged from 0.02 to 1 and was correlated closely with mean curvature (r=0.57, p=0.02) and contrast (r=0.6, p=0.001). Conclusion: The long-term goal of this study is to assess whether contouring variability can be reduced by providing feedback to the practitioner based on image feature analysis. The results are encouraging and will be used to develop a statistical model to predict contour accuracy purely from image feature analysis.
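
    The overlap metrics above can be computed directly from the voxel sets of the gold-standard volume and a manual contour. A minimal sketch, assuming the common set-based definitions of dice, false positive dice and false negative dice (the study's exact implementation is not reproduced; the voxel sets are illustrative):

```python
def dice_metrics(truth, contour):
    """Overlap metrics between a ground-truth tumor volume and a
    manual contour, both given as sets of voxel coordinates.

    Returns (dice, false_positive_dice, false_negative_dice), using
    the common definitions 2|T∩C|/(|T|+|C|), 2|C\\T|/(|T|+|C|) and
    2|T\\C|/(|T|+|C|).
    """
    inter = truth & contour
    denom = len(truth) + len(contour)
    dice = 2 * len(inter) / denom
    fpd = 2 * len(contour - truth) / denom   # false positive dice
    fnd = 2 * len(truth - contour) / denom   # false negative dice
    return dice, fpd, fnd

# Hypothetical 2D voxel sets for illustration
truth   = {(0, 0), (0, 1), (1, 0), (1, 1)}
contour = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice_metrics(truth, contour))
# -> (0.75, 0.25, 0.25)
```
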

  11. Biomechanical model for computing deformations for whole-body image registration: A meshless approach.

    PubMed

    Li, Mao; Miller, Karol; Joldes, Grand Roman; Kikinis, Ron; Wittek, Adam

    2016-12-01

    Patient-specific biomechanical models have been advocated as a tool for predicting deformations of soft body organs/tissue for medical image registration (aligning two sets of images) when differences between the images are large. However, complex and irregular geometry of the body organs makes generation of patient-specific biomechanical models very time-consuming. Meshless discretisation has been proposed to solve this challenge. However, applications so far have been limited to 2D models and computing single organ deformations. In this study, 3D comprehensive patient-specific nonlinear biomechanical models implemented using meshless Total Lagrangian explicit dynamics algorithms are applied to predict a 3D deformation field for whole-body image registration. Unlike a conventional approach that requires dividing (segmenting) the image into non-overlapping constituents representing different organs/tissues, the mechanical properties are assigned using the fuzzy c-means algorithm without the image segmentation. Verification indicates that the deformations predicted using the proposed meshless approach are for practical purposes the same as those obtained using the previously validated finite element models. To quantitatively evaluate the accuracy of the predicted deformations, we determined the spatial misalignment between the registered (i.e. source images warped using the predicted deformations) and target images by computing the edge-based Hausdorff distance. The Hausdorff distance-based evaluation determines that our meshless models led to successful registration of the vast majority of the image features. Copyright © 2016 John Wiley & Sons, Ltd.
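
    The edge-based Hausdorff distance above measures the worst-case misalignment between edge points of the registered and target images. A minimal brute-force sketch (the paper's edge extraction step is omitted and the point sets below are illustrative; real implementations typically accelerate this with distance transforms):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets, e.g. edge
    voxels extracted from the registered and target images."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5

    def directed(src, dst):
        # Largest distance from any source point to its nearest target
        return max(min(dist(p, q) for q in dst) for p in src)

    return max(directed(a, b), directed(b, a))

# Hypothetical 2D edge points
edges_registered = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
edges_target     = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (4.0, 1.0)]
print(hausdorff(edges_registered, edges_target))
```
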

  12. Predicting the Valence of a Scene from Observers’ Eye Movements

    PubMed Central

    R.-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J.; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

    Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of the features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that ‘saliency map’, ‘fixation histogram’, ‘histogram of fixation duration’, and ‘histogram of saccade slope’ are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images. PMID:26407322
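
    Features such as the histogram of saccade orientation above can be computed from a raw gaze trace by treating each pair of consecutive fixations as one saccade. A minimal sketch (the bin count and fixation coordinates below are illustrative, not the paper's settings):

```python
import math

def saccade_orientation_hist(fixations, n_bins=8):
    """Normalized histogram of saccade orientations.

    fixations: list of (x, y) gaze positions; each consecutive pair
    of fixations defines one saccade whose direction is binned.
    """
    hist = [0] * n_bins
    for (x1, y1), (x2, y2) in zip(fixations, fixations[1:]):
        angle = math.atan2(y2 - y1, x2 - x1) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist

# Four saccades: right, up, right, up
fixations = [(0, 0), (10, 0), (10, 10), (20, 10), (20, 20)]
print(saccade_orientation_hist(fixations, n_bins=4))
# -> [0.5, 0.5, 0.0, 0.0]
```
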

  13. No-reference quality assessment based on visual perception

    NASA Astrophysics Data System (ADS)

    Li, Junshan; Yang, Yawei; Hu, Shuangyan; Zhang, Jiao

    2014-11-01

    The visual quality assessment of images and videos is an ongoing, hot research topic that has become increasingly important for numerous image and video processing applications with the rapid development of digital imaging and communication technologies. The goal of image quality assessment (IQA) algorithms is to automatically assess the quality of images/videos in agreement with human quality judgments. Up to now, two kinds of models have been used for IQA, namely full-reference (FR) and no-reference (NR) models. FR models interpret image quality as fidelity or similarity to a perfect image in some perceptual space. However, the reference image is not available in many practical applications, and a NR IQA approach is then desired. Considering natural vision as optimized by millions of years of evolutionary pressure, many methods attempt to achieve consistency in quality prediction by modeling salient physiological and psychological features of the human visual system (HVS). To reach this goal, researchers try to simulate the HVS with image sparsity coding and supervised machine learning, reflecting two main characteristics of the HVS: it captures scenes by sparse coding and uses experienced knowledge to apperceive objects. In this paper, we propose a novel IQA approach based on visual perception. First, a standard model of the HVS is studied and analyzed, and the sparse representation of the image is computed with this model; then, the mapping between sparse codes and subjective quality scores is trained with least squares support vector machine (LS-SVM) regression, yielding a regressor that can predict image quality; finally, the visual quality of a test image is predicted with the trained regressor.
    We validate the performance of the proposed approach on the Laboratory for Image and Video Engineering (LIVE) database, which contains 227 JPEG2000 images, 233 JPEG images, 174 white noise images, 174 Gaussian blur images, and 174 fast fading images, together with a subjective differential mean opinion score (DMOS) for each image. The experimental results show that the proposed approach can not only assess many kinds of distorted images but also exhibits superior accuracy and monotonicity.

  14. Body image and eating disordered behavior in a community sample of Black and Hispanic women.

    PubMed

    Hrabosky, Joshua I; Grilo, Carlos M

    2007-01-01

    The current study examined body image concerns and eating disordered behaviors in a community sample of Black and Hispanic women. In addition, this study explored whether there are ethnic differences in the correlates or in the prediction of body image concerns. Participants were 120 (67 Black and 53 Hispanic) women who responded to advertisements to participate in a study of women and health. Participants completed a battery of established self-report measures to assess body image, eating disordered behaviors, and associated psychological domains. Black and Hispanic women did not differ significantly in their self-reports of body image, eating disordered behaviors, or associated psychological measures. Comparisons performed separately within both ethnic groups revealed significant differences by weight status, with a general graded patterning of greater concerns in obese than overweight than average weight groups. In terms of predicting body image, multiple regression analyses testing a number of variables, including BMI, performed separately for Black and Hispanic women revealed that eating concern and depressive affect were significant predictors of body image concern for both groups. Overall, Black and Hispanic women differed little in their self-reports of body image, eating-disordered features, and depressive affect. Higher weight was associated with a general pattern of increased body image concerns and features of eating disorders in both groups and with binge eating in Black women. Eating concerns and depressive affect emerged as significant independent predictors of body image for both ethnic groups.

  15. Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.

    PubMed

    Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H

    2017-07-01

    Extracranial robotic radiotherapy employs external markers and a correlation model to trace tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary motion traces directly to predict future values. Unlike these methods, pLRF-ELM first maps the raw respiratory motion into the random feature space of the ELM and then models the resulting higher-level features. The developed method was evaluated using a dataset acquired from 31 patients, for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods, and further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
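
    The core ELM idea referenced above is that inputs are projected through fixed random weights into a higher-level feature space, and only the output weights are fitted, by least squares. A minimal sketch of a basic ELM regressor for one-step-ahead prediction of a motion trace (a synthetic trace and a plain hidden layer; the local-receptive-field structure of the actual pLRF-ELM is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    """Fit a basic extreme learning machine regressor.

    The hidden weights W and biases b are random and fixed; only the
    output weights beta are solved, by least squares.
    """
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # random higher-level features
    beta = np.linalg.pinv(H) @ y    # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Synthetic respiratory-like trace; predict the next sample from the
# previous 5 samples (a one-step-ahead horizon).
t = np.arange(300) * 0.1
trace = np.sin(t) + 0.3 * np.sin(2.7 * t)
X = np.array([trace[i:i + 5] for i in range(len(trace) - 5)])
y = trace[5:]
model = elm_fit(X[:250], y[:250])
pred = elm_predict(model, X[250:])
print(float(np.mean((pred - y[250:]) ** 2)))  # held-out mean squared error
```
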

  16. Experimental demonstration of Klyshko's advanced-wave picture using a coincidence-count based, camera-enabled imaging system

    NASA Astrophysics Data System (ADS)

    Aspden, Reuben S.; Tasca, Daniel S.; Forbes, Andrew; Boyd, Robert W.; Padgett, Miles J.

    2014-04-01

    The Klyshko advanced-wave picture is a well-known tool for conceptualising spontaneous parametric down-conversion (SPDC) experiments. Although well known and understood, the picture has seen few experimental demonstrations of its validity. Here, we present an experimental demonstration of this picture using a time-gated camera in an image-based coincidence measurement. We show excellent agreement between the spatial distributions predicted by the Klyshko picture and those obtained using the SPDC photon pairs. An interesting speckle feature is present in the Klyshko predictive images due to the spatial coherence of the back-propagated beam in the multi-mode fibre. This effect can be removed by mechanically twisting the fibre, which degrades the spatial coherence of the beam and time-averages the speckle pattern, giving an accurate correspondence between the predictive and SPDC images.

  17. Lossless Compression of JPEG Coded Photo Collections.

    PubMed

    Wu, Hao; Sun, Xiaoyan; Yang, Jingyu; Zeng, Wenjun; Wu, Feng

    2016-04-06

    The explosion of digital photos has posed a significant challenge to photo storage and transmission for both personal devices and cloud platforms. In this paper, we propose a novel lossless compression method to further reduce the size of a set of JPEG coded correlated images without any loss of information. The proposed method jointly removes inter/intra image redundancy in the feature, spatial, and frequency domains. For each collection, we first organize the images into a pseudo video by minimizing the global prediction cost in the feature domain. We then present a hybrid disparity compensation method to better exploit both the global and local correlations among the images in the spatial domain. Furthermore, the redundancy between each compensated signal and the corresponding target image is adaptively reduced in the frequency domain. Experimental results demonstrate the effectiveness of the proposed lossless compression method. Compared to the JPEG coded image collections, our method achieves average bit savings of more than 31%.

  18. Association between mammogram density and background parenchymal enhancement of breast MRI

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Danala, Gopichandh; Wang, Yunzhi; Zarafshani, Ali; Qian, Wei; Liu, Hong; Zheng, Bin

    2018-02-01

    Breast density is widely considered an important risk factor for breast cancer. The purpose of this study is to examine the association between mammographic density and background parenchymal enhancement (BPE) of breast MRI. A dataset of breast MR images was acquired from 65 high-risk women. Based on mammographic density (BI-RADS) results, the dataset was divided into low and high breast density groups. The low-density group has 15 cases rated as mammographic density BI-RADS 1 and 2, while the high-density group includes 50 cases rated by radiologists as mammographic density BI-RADS 3 and 4. A computer-aided detection (CAD) scheme was applied to segment and register the breast regions depicted on sequential images of the breast MRI scans. The CAD scheme computed 20 global BPE features from the two breast regions together, from the left and right breast regions separately, and from the bilateral difference between the left and right breast regions. A feature selection method, namely the CFS method, was applied to remove the most redundant features and select optimal features from the initial feature pool. A logistic regression classifier was then built using the optimal features to predict mammographic density from the BPE features. Using a leave-one-case-out validation method, the classifier yields an accuracy of 82% and an area under the ROC curve of AUC=0.81+/-0.09. A box-plot based analysis also shows a negative association between mammographic density and BPE features in the MRI images. This study demonstrated a negative association between mammographic density and the BPE of breast MRI images.
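
    Leave-one-case-out validation, as used above, trains on all cases but one and tests on the held-out case, cycling through every case. A minimal sketch with a stand-in nearest-class-mean classifier on a single hypothetical feature (the study itself used a logistic regression on selected BPE features):

```python
def leave_one_case_out(cases, labels, train_fn, predict_fn):
    """Leave-one-case-out accuracy: each case is held out once, the
    classifier is trained on the rest, and accuracy is the fraction
    of held-out cases predicted correctly."""
    correct = 0
    for i in range(len(cases)):
        train_x = cases[:i] + cases[i + 1:]
        train_y = labels[:i] + labels[i + 1:]
        model = train_fn(train_x, train_y)
        correct += predict_fn(model, cases[i]) == labels[i]
    return correct / len(cases)

# Stand-in classifier: nearest class mean on one scalar feature
def train_fn(xs, ys):
    means = {}
    for c in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == c]
        means[c] = sum(vals) / len(vals)
    return means

def predict_fn(means, x):
    return min(means, key=lambda c: abs(x - means[c]))

# Hypothetical single BPE feature per case and density labels
features = [0.1, 0.2, 0.15, 0.9, 0.85, 0.8]
density  = ['low', 'low', 'low', 'high', 'high', 'high']
print(leave_one_case_out(features, density, train_fn, predict_fn))
# -> 1.0
```
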

  19. Quantitative Analysis of the Cervical Texture by Ultrasound and Correlation with Gestational Age.

    PubMed

    Baños, Núria; Perez-Moreno, Alvaro; Migliorelli, Federico; Triginer, Laura; Cobo, Teresa; Bonet-Carne, Elisenda; Gratacos, Eduard; Palacio, Montse

    2017-01-01

    Quantitative texture analysis has been proposed to extract robust features from the ultrasound image to detect subtle changes in the textures of the images. The aim of this study was to evaluate the feasibility of quantitative cervical texture analysis to assess cervical tissue changes throughout pregnancy. This was a cross-sectional study including singleton pregnancies between 20.0 and 41.6 weeks of gestation from women who delivered at term. Cervical length was measured, and a selected region of interest in the cervix was delineated. A model to predict gestational age based on features extracted from cervical images was developed following three steps: data splitting, feature transformation, and regression model computation. Seven hundred images, 30 per gestational week, were included for analysis. There was a strong correlation between the gestational age at which the images were obtained and the estimated gestational age by quantitative analysis of the cervical texture (R = 0.88). This study provides evidence that quantitative analysis of cervical texture can extract features from cervical ultrasound images which correlate with gestational age. Further research is needed to evaluate its applicability as a biomarker of the risk of spontaneous preterm birth, as well as its role in cervical assessment in other clinical situations in which cervical evaluation might be relevant. © 2016 S. Karger AG, Basel.

  20. Improving the mapping of crop types in the Midwestern U.S. by fusing Landsat and MODIS satellite data

    NASA Astrophysics Data System (ADS)

    Zhu, Likai; Radeloff, Volker C.; Ives, Anthony R.

    2017-06-01

    Mapping crop types is of great importance for assessing agricultural production, land-use patterns, and the environmental effects of agriculture. Both the radiometric and spatial resolution of the Landsat sensors are well suited to cropland monitoring. However, accurate mapping of crop types requires frequent cloud-free images during the growing season, which are often not available, and this raises the question of whether Landsat data can be combined with data from other satellites. Here, our goal is to evaluate to what degree fusing Landsat with MODIS Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) data can improve crop-type classification. Choosing either one or two images from all cloud-free Landsat observations available for the Arlington Agricultural Research Station area in Wisconsin from 2010 to 2014, we generated 87 combinations of images, and used each combination as input to the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM) algorithm to predict Landsat-like images at the nominal dates of each 8-day MODIS NBAR product. Both the original Landsat and STARFM-predicted images were then classified with a support vector machine (SVM), and we compared the classification errors of three scenarios: 1) classifying the one or two original Landsat images of each combination only, 2) classifying the one or two original Landsat images plus all STARFM-predicted images, and 3) classifying the one or two original Landsat images together with STARFM-predicted images for key dates. Our results indicated that using two Landsat images as the input to STARFM did not significantly improve the STARFM predictions compared to using only one, and that predictions using Landsat images between July and August as input were most accurate.
Including all STARFM-predicted images together with the Landsat images significantly increased average classification error by 4 percentage points (from 21% to 25%) compared to using only Landsat images. However, incorporating only STARFM-predicted images for key dates decreased average classification error by 2 percentage points (from 21% to 19%) compared to using only Landsat images. In particular, if only a single Landsat image was available, adding STARFM predictions for key dates significantly decreased the average classification error by 4 percentage points, from 30% to 26% (p < 0.05). We conclude that adding STARFM-predicted images can be effective for improving crop-type classification when only limited Landsat observations are available, but carefully selecting images from the full set of STARFM predictions is crucial. We developed an approach to identify the optimal subsets of all STARFM predictions, which offers an alternative method of feature selection for future research.

  1. Non-destructive assessment of instrumental and sensory tenderness of lamb meat using NIR hyperspectral imaging.

    PubMed

    Kamruzzaman, Mohammed; Elmasry, Gamal; Sun, Da-Wen; Allen, Paul

    2013-11-01

    The purpose of this study was to develop and test a hyperspectral imaging system (900-1700 nm) to predict instrumental and sensory tenderness of lamb meat. Warner-Bratzler shear force (WBSF) values and sensory scores from trained panellists were collected as indicators of instrumental and sensory tenderness, respectively. Partial least squares regression models were developed for predicting instrumental and sensory tenderness with reasonable accuracy (Rcv = 0.84 for WBSF and 0.69 for sensory tenderness). Overall, the results confirmed that the spectral data could serve as a screening tool to rapidly categorise lamb steaks as good (i.e. tender) or bad (i.e. tough) based on WBSF values and sensory scores, with overall accuracies of about 94.51% and 91%, respectively. The successive projections algorithm (SPA) was used to select the most important wavelengths for WBSF prediction. Additionally, textural features from the gray level co-occurrence matrix (GLCM) were extracted to determine the correlation between textural features and WBSF values. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. Computer-aided screening system for cervical precancerous cells based on field emission scanning electron microscopy and energy dispersive x-ray images and spectra

    NASA Astrophysics Data System (ADS)

    Jusman, Yessi; Ng, Siew-Cheok; Hasikin, Khairunnisa; Kurnia, Rahmadi; Osman, Noor Azuan Bin Abu; Teoh, Kean Hooi

    2016-10-01

    The capability of field emission scanning electron microscopy and energy dispersive x-ray spectroscopy (FE-SEM/EDX) to scan material structures at the microlevel and characterize a material by its elemental properties inspired this research, which developed an FE-SEM/EDX-based cervical cancer screening system. The computer-aided screening system consisted of two parts: automatic feature extraction and classification. For feature extraction, an algorithm was introduced to extract discriminant features from both the images and the spectra of cervical cells in FE-SEM/EDX data. The system automatically extracted two types of features based on FE-SEM/EDX images and FE-SEM/EDX spectra. Textural features were extracted from the FE-SEM/EDX image using a gray level co-occurrence matrix technique, while the FE-SEM/EDX spectral features were calculated from peak heights and corrected areas under the peaks. A discriminant analysis technique was employed to classify the cervical precancerous stage into three classes: normal, low-grade squamous intraepithelial lesion (LSIL), and high-grade squamous intraepithelial lesion (HSIL). The capability of the developed screening system was tested using 700 FE-SEM/EDX spectra (300 normal, 200 LSIL, and 200 HSIL cases). The accuracy, sensitivity, and specificity were 98.2%, 99.0%, and 98.0%, respectively.
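The gray level co-occurrence matrix (GLCM) step used for the image features can be illustrated with a small hand-rolled implementation; the quantization level, offset, and the three Haralick-style properties below are generic choices, not the paper's exact parameters.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Co-occurrence matrix for offset (dy, dx) plus contrast, energy,
    and homogeneity, typical GLCM texture descriptors."""
    # Quantize the image to `levels` gray levels.
    q = (img.astype(float) / (img.max() + 1e-12) * levels).astype(int)
    q = np.clip(q, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    glcm /= glcm.sum()  # normalize to a joint probability
    r, c = np.indices(glcm.shape)
    return {
        "contrast": float(np.sum(glcm * (r - c) ** 2)),
        "energy": float(np.sum(glcm ** 2)),
        "homogeneity": float(np.sum(glcm / (1.0 + np.abs(r - c)))),
    }

# A flat patch has zero contrast and maximal energy; a noisy patch does not.
rng = np.random.default_rng(2)
flat = np.full((32, 32), 100.0)
noisy = rng.uniform(0, 255, (32, 32))
feats_flat = glcm_features(flat)
feats_noisy = glcm_features(noisy)
print(feats_flat, feats_noisy)
```

In practice a library routine (e.g. scikit-image's `graycomatrix`/`graycoprops`) would replace the explicit loops, but the hand-rolled version shows exactly what the matrix counts.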

  3. Machine Vision for Relative Spacecraft Navigation During Approach to Docking

    NASA Technical Reports Server (NTRS)

    Chien, Chiun-Hong; Baker, Kenneth

    2011-01-01

    This paper describes a machine vision system for relative spacecraft navigation during the terminal phase of approach to docking that: 1) matches high-contrast image features of the target vehicle, as seen by a camera bore-sighted to the docking adapter on the chase vehicle, to the corresponding features in a 3D model of the docking adapter on the target vehicle, and 2) is robust to on-orbit lighting. An implementation is provided for the case of the Space Shuttle Orbiter docking to the International Space Station (ISS), with quantitative test results using a full-scale, medium-fidelity mock-up of the ISS docking adapter mounted on a 6-DOF motion platform at the NASA Marshall Spaceflight Center Flight Robotics Laboratory, and qualitative test results using recorded video from the Orbiter Docking System Camera (ODSC) during multiple Orbiter-to-ISS docking missions. The Natural Feature Image Registration (NFIR) system consists of two modules: 1) Tracking, which tracks the target object from image to image and estimates the position and orientation (pose) of the docking camera relative to the target object, and 2) Acquisition, which recognizes the target object if it is in the docking camera field of view and provides an approximate pose that is used to initialize tracking. Detected image edges are matched to the 3D model edges whose predicted location, based on the pose estimate and its first time derivative from the previous frame, is closest to the detected edge. Mismatches are eliminated using a rigid motion constraint. The remaining 2D image to 3D model matches are used to make a least squares estimate of the change in relative pose from the previous image to the current image. The changes in position and attitude are used as data for two Kalman filters whose outputs are smoothed estimates of position and velocity plus attitude and attitude rate, which are then used to predict the location of the 3D model features in the next image.
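The predict-then-update cycle of the Kalman filters above can be sketched for a single axis with a constant-velocity model; the noise levels, time step, and the simulated closing trajectory are illustrative assumptions, not NFIR's tuning.

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1e-5, r=0.25):
    """Constant-velocity Kalman filter: smooths noisy per-frame position
    measurements and carries a velocity state; the predict step mirrors
    how a tracker predicts feature locations in the next frame."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition
    H = np.array([[1.0, 0.0]])             # only position is measured
    Q = q * np.eye(2)                      # process noise
    R = np.array([[r]])                    # measurement noise
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    out = []
    for z in zs:
        # Predict the next state from the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x.copy())
    return np.array(out)

rng = np.random.default_rng(3)
truth = 0.1 * np.arange(100)               # target closing at 0.1 units/frame
meas = truth + rng.normal(0, 0.5, 100)     # noisy per-frame pose estimates
est = kalman_cv(meas)
print("final [position, velocity] estimate:", est[-1])
```

The filter recovers the closing rate even though only positions are observed, which is precisely what makes one-frame-ahead feature prediction possible.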

  4. Modeling the shape and composition of the human body using dual energy X-ray absorptiometry images

    PubMed Central

    Shepherd, John A.; Fan, Bo; Schwartz, Ann V.; Cawthon, Peggy; Cummings, Steven R.; Kritchevsky, Stephen; Nevitt, Michael; Santanasto, Adam; Cootes, Timothy F.

    2017-01-01

    There is growing evidence that body shape and regional body composition are strong indicators of metabolic health. The purpose of this study was to develop statistical models that accurately describe holistic body shape, thickness, and leanness. We hypothesized that there are unique body shape features that are predictive of mortality beyond standard clinical measures. We developed algorithms to process whole-body dual-energy X-ray absorptiometry (DXA) scans into body thickness and leanness images. We performed statistical appearance modeling (SAM) and principal component analysis (PCA) to efficiently encode the variance of body shape, leanness, and thickness across a sample of 400 older Americans from the Health ABC study. The sample included 200 cases and 200 controls based on 6-year mortality status, matched on sex, race, and BMI. The final model contained 52 points outlining the torso, upper arms, thighs, and bony landmarks. Correlation analyses were performed on the PCA parameters to identify body shape features that vary across groups and with metabolic risk. Stepwise logistic regression was performed to identify sex and race, and to predict mortality risk as a function of body shape parameters. These parameters are novel body composition features that uniquely identify the body phenotypes of different groups and predict mortality risk. Three parameters from a SAM of body leanness and thickness accurately identified sex (training AUC = 0.99) and six accurately identified race (training AUC = 0.91) in the sample dataset. Three parameters from a SAM of only body thickness predicted mortality (training AUC = 0.66, validation AUC = 0.62). Further study is warranted to identify specific shape/composition features that predict other health outcomes. PMID:28423041
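The PCA encoding at the heart of the statistical appearance model can be sketched as follows: each subject's flattened outline is compressed to a few parameters and reconstructed from them. The 400-subject, 52-point geometry matches the abstract, but the "shapes" are synthetic draws from three invented latent modes, not Health ABC data.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)

# Synthetic stand-in: 400 subjects, each a 52-point outline flattened to a
# 104-vector, generated from 3 hidden "shape modes" plus small noise.
modes = rng.normal(size=(3, 104))
latent = rng.normal(size=(400, 3))
shapes = latent @ modes + rng.normal(0, 0.05, (400, 104))

# PCA encodes each shape with a handful of parameters...
pca = PCA(n_components=3).fit(shapes)
params = pca.transform(shapes)

# ...and those parameters reconstruct the shape almost exactly.
recon = pca.inverse_transform(params)
err = np.mean((recon - shapes) ** 2)
var_ratio = pca.explained_variance_ratio_.sum()
print(f"explained variance: {var_ratio:.3f}, reconstruction MSE: {err:.4f}")
```

The per-subject `params` are the low-dimensional features that downstream correlation analyses and logistic regressions operate on.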

  5. Knowledge Driven Image Mining with Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Oza, Nikunj

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven image mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. In that high dimensional feature space, linear clustering, prediction, and classification algorithms can be applied and the results can be mapped back down to the original image space. Thus, highly nonlinear structure in the image can be recovered through the use of well-known linear mathematics in the feature space. This process has a number of advantages over traditional methods in that it allows for nonlinear interactions to be modelled with only a marginal increase in computational costs. In this paper, we present the theory of Mercer Kernels, describe its use in image mining, discuss a new method to generate Mercer Kernels directly from data, and compare the results with existing algorithms on data from the MODIS (Moderate Resolution Imaging Spectroradiometer) instrument taken over the Arctic region. We also discuss the potential application of these methods on the Intelligent Archive, a NASA initiative for developing a tagged image data warehouse for the Earth Sciences.

  7. Variable importance in nonlinear kernels (VINK): classification of digitized histopathology.

    PubMed

    Ginsburg, Shoshana; Ali, Sahirzeeshan; Lee, George; Basavanhally, Ajay; Madabhushi, Anant

    2013-01-01

    Quantitative histomorphometry is the process of modeling appearance of disease morphology on digitized histopathology images via image-based features (e.g., texture, graphs). Due to the curse of dimensionality, building classifiers with large numbers of features requires feature selection (which may require a large training set) or dimensionality reduction (DR). DR methods map the original high-dimensional features in terms of eigenvectors and eigenvalues, which limits the potential for feature transparency or interpretability. Although methods exist for variable selection and ranking on embeddings obtained via linear DR schemes (e.g., principal components analysis (PCA)), similar methods do not yet exist for nonlinear DR (NLDR) methods. In this work we present a simple yet elegant method for approximating the mapping between the data in the original feature space and the transformed data in the kernel PCA (KPCA) embedding space; this mapping provides the basis for quantification of variable importance in nonlinear kernels (VINK). We show how VINK can be implemented in conjunction with the popular Isomap and Laplacian eigenmap algorithms. VINK is evaluated in the contexts of three different problems in digital pathology: (1) predicting five year PSA failure following radical prostatectomy, (2) predicting Oncotype DX recurrence risk scores for ER+ breast cancers, and (3) distinguishing good and poor outcome p16+ oropharyngeal tumors. We demonstrate that subsets of features identified by VINK provide similar or better classification or regression performance compared to the original high dimensional feature sets.
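The KPCA embedding that VINK operates on, and the idea of relating original variables back to the embedding, can be sketched as below. The correlation-based ranking here is only a crude sanity-check proxy (VINK proper approximates the kernel mapping itself), and the data are synthetic: variables 0 and 1 carry the nonlinear structure by construction.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)

# 200 samples, 10 variables: variables 0 and 1 trace a noisy circle
# (nonlinear structure); the remaining 8 are small-amplitude noise.
t = rng.uniform(0, 2 * np.pi, 200)
X = 0.05 * rng.normal(size=(200, 10))
X[:, 0] += np.cos(t)
X[:, 1] += np.sin(t)

# Nonlinear dimensionality reduction with an RBF kernel (KPCA).
Z = KernelPCA(n_components=2, kernel="rbf", gamma=1.0).fit_transform(X)

# Crude importance proxy: max |correlation| of each input variable with
# the embedding coordinates, used only to check which variables drive it.
C = np.corrcoef(np.hstack([X, Z]).T)
imp = np.abs(C[:10, 10:]).max(axis=1)
top2 = set(np.argsort(imp)[::-1][:2].tolist())
print("variables driving the embedding:", sorted(top2))
```

Recovering which of the original measured features matter, rather than just the anonymous embedding coordinates, is exactly the interpretability gap VINK addresses.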

  8. Automated retrieval of forest structure variables based on multi-scale texture analysis of VHR satellite imagery

    NASA Astrophysics Data System (ADS)

    Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine

    2014-10-01

    The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution satellite imagery, relying on typical variables such as crown diameter, tree height, trunk diameter, tree density, and tree spacing. The emphasis is placed on automating the identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick texture features. The main drawback of this well-known texture representation is its underlying parameters, which are extremely difficult to set due to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process is proposed, based on statistical modeling and exploring a wide range of parameter values. It provides texture measures over diverse spatial parameters, implicitly inducing a multi-scale texture analysis. A new feature selection technique, which we call Random PRiF, is proposed: it relies on random sampling in feature space and carefully addresses the multicollinearity issue in multiple linear regression while ensuring accurate prediction of forest variables. Our automated forest variable estimation scheme was tested on QuickBird and Pléiades panchromatic and multispectral images, acquired at different periods over the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques, and has been successfully applied to identify the best texture features for modeling the five considered forest structure variables.
The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features, with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. Thus an average prediction error of ˜ 1.1 m is expected on crown diameter, ˜ 0.9 m on tree spacing, ˜ 3 m on height and ˜ 0.06 m on diameter at breast height.

  9. Computational ghost imaging using deep learning

    NASA Astrophysics Data System (ADS)

    Shimobaba, Tomoyoshi; Endo, Yutaka; Nishitsuji, Takashi; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Shiraki, Atsushi; Ito, Tomoyoshi

    2018-04-01

    Computational ghost imaging (CGI) is a single-pixel imaging technique that exploits the correlation between known random patterns and the measured intensity of light transmitted (or reflected) by an object. Although CGI can obtain two- or three-dimensional images with a single or a few bucket detectors, the quality of the reconstructed images is degraded by the statistical noise inherent in reconstructing images from random patterns. In this study, we improve the quality of CGI images using deep learning. A deep neural network is used to automatically learn the features of noise-contaminated CGI images. After training, the network is able to predict low-noise images from new noise-contaminated CGI images.
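The correlation reconstruction that produces the noise-contaminated CGI images (the input the paper's network then denoises) can be simulated end to end; the object, pattern count, and pattern statistics below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(6)

# Ground-truth object: a bright square on a dark background (16x16).
obj = np.zeros((16, 16))
obj[5:11, 5:11] = 1.0

# CGI forward model: project M known random patterns, record one bucket
# (single-pixel) intensity per pattern.
M = 4000
patterns = rng.random((M, 16, 16))
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Correlation reconstruction, <(B - <B>) P>, recovers the object but is
# contaminated by exactly the statistical noise the paper's network removes.
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=(0, 0)) / M
recon -= recon.min()
recon /= recon.max()
corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
print(f"reconstruction correlation with object: {corr:.2f}")
```

Lowering `M` makes the reconstruction visibly noisier, which is why few-measurement CGI benefits from a learned denoiser.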

  10. Rapid and non-destructive identification of water-injected beef samples using multispectral imaging analysis.

    PubMed

    Liu, Jinxia; Cao, Yue; Wang, Qiu; Pan, Wenjuan; Ma, Fei; Liu, Changhong; Chen, Wei; Yang, Jianbo; Zheng, Lei

    2016-01-01

    Water-injected beef has aroused public concern as a major food-safety issue in meat products. In this study, the potential of multispectral imaging analysis in the visible and near-infrared (405-970 nm) regions was evaluated for identifying water-injected beef. A multispectral vision system was used to acquire images of beef injected with up to 21% water, and a partial least squares regression (PLSR) algorithm was employed to establish a prediction model, leading to quantitative estimates of actual water increase with a correlation coefficient (r) of 0.923. Subsequently, an optimized model was achieved by integrating the spectral data with feature information extracted from ordinary RGB data, yielding better predictions (r = 0.946). Moreover, the prediction equation was applied to each pixel within the images to visualize the distribution of actual water increase. These results demonstrate the capability of multispectral imaging technology as a rapid and non-destructive tool for the identification of water-injected beef. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Postharvest monitoring of organic potato (cv. Anuschka) during hot-air drying using visible-NIR hyperspectral imaging.

    PubMed

    Moscetti, Roberto; Sturm, Barbara; Crichton, Stuart Oj; Amjad, Waseem; Massantini, Riccardo

    2018-05-01

    The potential of hyperspectral imaging (500-1010 nm) was evaluated for monitoring the quality of potato slices (var. Anuschka) of 5, 7 and 9 mm thickness subjected to air drying at 50 °C. The study investigated three different feature selection methods for the prediction of dry basis moisture content and colour of potato slices using partial least squares (PLS) regression. The feature selection strategies tested included interval PLS regression (iPLS) and the differences and ratios between raw reflectance values for each possible pair of wavelengths (R[λ 1 ]-R[λ 2 ] and R[λ 1 ]:R[λ 2 ], respectively). Moreover, the combination of the spectral and spatial domains was tested. Excellent results were obtained using the iPLS algorithm; however, features from the datasets of raw reflectance differences and ratios represent suitable alternatives for developing low-complexity prediction models. Finally, the dry basis moisture content was predicted with high accuracy by combining spectral data (i.e. R[511 nm]-R[994 nm]) and the spatial domain (i.e. relative area shrinkage of the slice). Modelling the data acquired during drying through hyperspectral imaging can provide useful information concerning the chemical and physicochemical changes of the product. With this information, the proposed approach lays the foundations for a more efficient smart dryer whose process can be optimized for drying potato slices. © 2017 Society of Chemical Industry.
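The wavelength-pair strategy above, exhaustively testing R[λ1]-R[λ2] difference features, can be sketched as a brute-force search. The spectra and moisture values are simulated, with the informative pair planted at bands 5 and 40 by construction; the paper additionally tests ratio features, omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in hyperspectral data: 60 samples x 50 bands; "moisture" is driven
# by the difference between two particular bands (here 5 and 40).
spectra = rng.uniform(0.2, 0.8, (60, 50))
moisture = 2.0 * (spectra[:, 5] - spectra[:, 40]) + rng.normal(0, 0.05, 60)

# Exhaustive search over all band pairs for the difference feature
# R[l1] - R[l2] best correlated with the target.
best, best_r = None, 0.0
for i in range(50):
    for j in range(i + 1, 50):
        r = abs(np.corrcoef(spectra[:, i] - spectra[:, j], moisture)[0, 1])
        if r > best_r:
            best, best_r = (i, j), r
print(f"best pair {best}, |r| = {best_r:.2f}")
```

A single two-band difference like this is what makes the resulting prediction model "low-complexity" compared with full-spectrum PLS.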

  12. A hybrid deep learning approach to predict malignancy of breast lesions using mammograms

    NASA Astrophysics Data System (ADS)

    Wang, Yunzhi; Heidari, Morteza; Mirniaharikandehei, Seyedehnafiseh; Gong, Jing; Qian, Wei; Qiu, Yuchen; Zheng, Bin

    2018-03-01

    Applying deep learning technology to the medical imaging informatics field has recently attracted extensive research interest. However, the limited size of medical image datasets often reduces the performance and robustness of deep learning based computer-aided detection and/or diagnosis (CAD) schemes. In an attempt to address this technical challenge, this study aims to develop and evaluate a new hybrid deep learning based CAD approach to predict the likelihood that a breast lesion detected on a mammogram is malignant. In this approach, a deep Convolutional Neural Network (CNN) was first pre-trained on the ImageNet dataset and served as a feature extractor. A pseudo-color Region of Interest (ROI) method was used to generate ROIs with RGB channels from the mammographic images as input to the pre-trained deep network. The transferred CNN features from different layers of the CNN were then obtained, and a linear support vector machine (SVM) was trained for the prediction task. Applying the approach to a dataset of 301 suspicious breast lesions and using a leave-one-case-out validation method, the areas under the ROC curve (AUC) were 0.762 and 0.792 for the traditional CAD scheme and the proposed deep learning based CAD scheme, respectively. An ensemble classifier that combines the classification scores generated by the two schemes yielded an improved AUC of 0.813. The study results demonstrate the feasibility and potentially improved performance of applying a hybrid deep learning approach to develop a CAD scheme using a relatively small dataset of medical images.
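The ensemble step, combining the classification scores of two independent schemes, can be sketched as below. The paper's two schemes (a traditional CAD pipeline and a CNN-feature SVM) are not available, so two generic classifiers on synthetic data stand in; the score-averaging combination rule is an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the lesion feature data.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

# Two differing "schemes" each produce a malignancy score per case...
s1 = LogisticRegression(max_iter=1000).fit(Xtr, ytr).predict_proba(Xte)[:, 1]
s2 = RandomForestClassifier(random_state=0).fit(Xtr, ytr).predict_proba(Xte)[:, 1]

# ...and the ensemble simply averages the two scores.
ens = (s1 + s2) / 2
for name, s in [("scheme 1", s1), ("scheme 2", s2), ("ensemble", ens)]:
    print(f"{name}: AUC = {roc_auc_score(yte, s):.3f}")
```

Averaging helps when the two schemes make partly uncorrelated errors, which is the rationale for combining a hand-crafted-feature scheme with a transfer-learning scheme.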

  13. Preliminary study of tumor heterogeneity in imaging predicts two year survival in pancreatic cancer patients.

    PubMed

    Chakraborty, Jayasree; Langdon-Embry, Liana; Cunanan, Kristen M; Escalon, Joanna G; Allen, Peter J; Lowery, Maeve A; O'Reilly, Eileen M; Gönen, Mithat; Do, Richard G; Simpson, Amber L

    2017-01-01

    Pancreatic ductal adenocarcinoma (PDAC) is one of the most lethal cancers in the United States, with a five-year survival rate of 7.2% for all stages. Although surgical resection is the only curative treatment, we are currently unable to differentiate resectable patients with occult metastatic disease from those with potentially curable disease. Identification of patients with poor prognosis via early classification would help in initial management, including the use of neoadjuvant chemotherapy or radiation, or in the choice of postoperative adjuvant therapy. PDAC ranges in appearance from homogeneously isoattenuating masses to heterogeneously hypovascular tumors on CT images; hence, we hypothesize that heterogeneity reflects underlying differences at the histologic or genetic level and will therefore correlate with patient outcome. We quantify the heterogeneity of PDAC with texture analysis to predict 2-year survival. Using fuzzy minimum-redundancy maximum-relevance feature selection and a naive Bayes classifier, the proposed features achieve an area under the receiver operating characteristic curve (AUC) of 0.90 and accuracy (Ac) of 82.86% with the leave-one-image-out technique, and an AUC of 0.80 and Ac of 75.0% with three-fold cross-validation. We conclude that texture analysis can be used to quantify heterogeneity in CT images to accurately predict 2-year survival in patients with pancreatic cancer. From these data, we infer differences in the biological evolution of pancreatic cancer subtypes measurable in imaging and identify opportunities for optimized patient selection for therapy.
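The evaluation pipeline (feature selection inside each fold, a naive Bayes classifier, three-fold cross-validated AUC and accuracy) can be sketched as follows. A plain mutual-information filter stands in for fuzzy mRMR, and the features are synthetic, so only the pipeline structure, not the paper's numbers, is reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

# Stand-in texture data: 100 "patients", 40 features, a handful informative.
X, y = make_classification(n_samples=100, n_features=40, n_informative=5,
                           random_state=1)

# Selection is wrapped in the pipeline so it is refit per fold (no leakage).
model = make_pipeline(SelectKBest(mutual_info_classif, k=5), GaussianNB())
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
probs = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
auc = roc_auc_score(y, probs)
acc = accuracy_score(y, probs > 0.5)
print(f"3-fold AUC = {auc:.2f}, Ac = {acc:.2%}")
```

Putting the selector inside the pipeline is the detail that keeps the cross-validated AUC honest; selecting features on all data first would leak labels into every fold.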

  14. Association of mammographic image feature change and an increasing risk trend of developing breast cancer: an assessment

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Leader, Joseph K.; Liu, Hong; Zheng, Bin

    2015-03-01

    We recently investigated a new mammographic image feature based risk factor to predict near-term breast cancer risk after a woman has a negative mammographic screening. We hypothesized that, unlike conventional epidemiology-based long-term (or lifetime) risk factors, the value of the mammographic image feature based risk factor will increase as the time lag between the negative and positive mammography screenings decreases. The purpose of this study is to test this hypothesis. From a large and diverse full-field digital mammography (FFDM) image database with 1278 cases, we collected all available sequential FFDM examinations for each case, including the "current" and the 1 to 3 most recent "prior" examinations. All "prior" examinations were interpreted as negative, and "current" ones were either malignant or recalled negative/benign. We computed 92 global mammographic texture and density based features, and included three clinical risk factors (the woman's age, family history, and subjective breast density BIRADS ratings). To this initial feature set we applied a fast and accurate Sequential Forward Floating Selection (SFFS) feature selection algorithm to reduce feature dimensionality. The features computed on the two mammographic views were trained separately using two artificial neural network (ANN) classifiers, and the classification scores of the two ANNs were then merged with a sequential ANN. The results show that the maximum adjusted odds ratios were 5.59, 7.98, and 15.77 using the 3rd, 2nd, and 1st "prior" FFDM examinations, respectively, which demonstrates an association between mammographic image feature change and an increasing trend in the risk of developing breast cancer in the near-term after a negative screening.
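The dimensionality-reduction step can be sketched with plain sequential forward selection; note the sketch drops the "floating" backtracking step of SFFS, and logistic regression stands in for the ANNs to keep it fast. The data are synthetic.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# 200 synthetic "cases" with 30 texture/density-style features,
# only a few of which are informative.
X, y = make_classification(n_samples=200, n_features=30, n_informative=4,
                           random_state=2)

# Greedy forward selection: repeatedly add the feature that most improves
# cross-validated performance of the wrapped classifier.
clf = LogisticRegression(max_iter=500)
sfs = SequentialFeatureSelector(clf, n_features_to_select=5, cv=3).fit(X, y)
X_sel = sfs.transform(X)

auc = cross_val_score(clf, X_sel, y, cv=3, scoring="roc_auc").mean()
print(f"selected feature indices: {np.flatnonzero(sfs.get_support())}")
print(f"cross-validated AUC on the reduced feature set: {auc:.2f}")
```

Full SFFS additionally tries removing previously selected features after each addition, which helps escape the greedy choices plain forward selection can get stuck on.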

  15. Probability-Based Recognition Framework for Underwater Landmarks Using Sonar Images †

    PubMed Central

    Choi, Jinwoo; Choi, Hyun-Taek

    2017-01-01

    This paper proposes a probability-based framework for recognizing underwater landmarks using sonar images. Current recognition methods use a single image, which does not provide reliable results because of the weaknesses of sonar images, such as an unstable acoustic source, heavy speckle noise, low resolution, and a single channel. However, using consecutive sonar images, if the status—i.e., the existence and identity (or name)—of an object is continuously evaluated by a stochastic method, the result of the recognition method is available for calculating the uncertainty, making it more suitable for various applications. Our proposed framework consists of three steps: (1) candidate selection, (2) continuity evaluation, and (3) Bayesian feature estimation. Two probabilistic methods—particle filtering and Bayesian feature estimation—are used to repeatedly estimate the continuity and features of objects in consecutive images. Thus, the status of the object is repeatedly predicted and updated by a stochastic method. Furthermore, we develop an artificial landmark to increase detectability by an imaging sonar, exploiting the characteristics of acoustic waves, such as instability and reflection depending on the roughness of the reflector surface. The proposed method is verified by basin experiments, and the results are presented. PMID:28837068
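The repeated predict-and-update idea can be illustrated with a scalar recursive-Bayes filter for a landmark's existence probability across consecutive frames; the detection and false-alarm probabilities below are invented for illustration, and the full framework uses particle filtering over richer states.

```python
import numpy as np

def bayes_update(prior, detected, p_d=0.9, p_fa=0.2):
    """One recursive-Bayes step for P(landmark exists), given a per-frame
    detector with detection probability p_d and false-alarm probability
    p_fa (a scalar analogue of the framework's status evaluation)."""
    like_yes = p_d if detected else (1.0 - p_d)     # P(obs | exists)
    like_no = p_fa if detected else (1.0 - p_fa)    # P(obs | no object)
    return like_yes * prior / (like_yes * prior + like_no * (1.0 - prior))

# A real landmark detected in most frames: confidence climbs despite a miss.
p = 0.5
for det in [True, True, False, True, True, True]:
    p = bayes_update(p, det)
    print(f"detected={det}: P(exists) = {p:.3f}")
```

A single noisy frame (the `False` detection) only dents the estimate, which is exactly the robustness gained by evaluating status over consecutive images rather than one.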

  16. Brain injury patterns in hypoglycemia in neonatal encephalopathy.

    PubMed

    Wong, D S T; Poskitt, K J; Chau, V; Miller, S P; Roland, E; Hill, A; Tam, E W Y

    2013-07-01

    Low glucose values are often seen in term infants with neonatal encephalopathy (NE), including hypoxic-ischemic encephalopathy (HIE), yet the contribution of hypoglycemia to the pattern of neurologic injury remains unclear. We hypothesized that MR features of neonatal hypoglycemia could be detected superimposed on the predominant HIE injury pattern. Term neonates (n = 179) with NE were prospectively imaged with day-3 MR studies and had glucose data available for review. The predominant imaging pattern of HIE was recorded as watershed, basal ganglia, total, focal-multifocal, or no injury. Radiologic hypoglycemia was diagnosed on the basis of selective edema in the posterior white matter, pulvinar, and anterior medial thalamic nuclei. Clinical charts were reviewed for evidence of NE, HIE, and hypoglycemia (<46 mg/dL). The predominant patterns of HIE injury imaged included 17 watershed, 25 basal ganglia, 10 total, 42 focal-multifocal, and 85 cases of no injury. A radiologic diagnosis of hypoglycemia was made in 34 cases. Compared with laboratory-confirmed hypoglycemia, the MR findings had a positive predictive value of 82% and a negative predictive value of 78%. Sixty (34%) neonates had clinical hypoglycemia before MR imaging. Adjusting for 5-minute Apgar scores and umbilical artery pH with logistic regression, clinical hypoglycemia was associated with 17.6-fold higher odds of MR imaging identification (P < .001). Selective posterior white matter and pulvinar edema were most predictive of clinical hypoglycemia, and no injury (36%) or a watershed (32%) pattern of injury was seen more often in severe hypoglycemia. In term infants with NE and hypoglycemia, specific imaging features of both hypoglycemia and hypoxia-ischemia can be identified.

  17. Techniques for identifying dust devils in Mars Pathfinder images

    USGS Publications Warehouse

    Metzger, S.M.; Carr, J.R.; Johnson, J. R.; Parker, T.J.; Lemmon, M.T.

    2000-01-01

    Image processing methods used to identify and enhance dust devil features imaged by IMP (Imager for Mars Pathfinder) are reviewed. Spectral differences, visible red minus visible blue, were used for initial dust devil searches, driven by the observation that Martian dust has high red and low blue reflectance. The Martian sky proved to be more heavily dust-laden than pre-Pathfinder predictions, based on analysis of images from the Hubble Space Telescope. As a result, these initial spectral difference methods failed to contrast dust devils with background dust haze. Imager artifacts (dust motes on the camera lens, flat-field effects caused by imperfections in the CCD, and projection onto a flat sensor plane by a convex lens) further impeded the ability to resolve subtle dust devil features. Consequently, reference images containing sky with a minimal horizon were first subtracted from each spectral filter image to remove camera artifacts and reduce the background dust haze signal. Once the sky-flat preprocessing step was completed, the red-minus-blue spectral difference scheme was attempted again. Dust devils then were successfully identified as bright plumes. False-color ratios using calibrated IMP images were found useful for visualizing dust plumes, verifying initial discoveries as vortex-like features. Enhancement of monochromatic (especially blue filter) images revealed dust devils as silhouettes against brighter background sky. Experiments with principal components transformation identified dust devils in raw, uncalibrated IMP images and further showed relative movement of dust devils across the Martian surface. A variety of methods therefore served qualitative and quantitative goals for dust plume identification and analysis in an environment where such features are obscure.
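
The sky-flat preprocessing and red-minus-blue differencing can be sketched in a few lines of numpy (an illustrative toy, not the IMP pipeline): subtract a sky reference from each filter image to suppress camera artifacts and background haze, then difference the filters so a plume with high red and low blue reflectance stands out.

```python
import numpy as np

def red_minus_blue(red, blue, sky_red, sky_blue):
    """Sky-flat subtraction per filter, then the red-minus-blue difference."""
    r = red.astype(float) - sky_red.astype(float)
    b = blue.astype(float) - sky_blue.astype(float)
    return r - b

# Synthetic 8x8 scene: uniform haze plus a small "plume" brighter in red only.
sky = np.full((8, 8), 10.0)
red = sky.copy(); red[3:5, 3:5] += 5.0   # plume adds red signal
blue = sky.copy()                         # plume adds no blue signal
diff = red_minus_blue(red, blue, sky, sky)
```

In the difference image the haze cancels to zero everywhere except the plume pixels, which appear as a bright region.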

  18. Relative location prediction in CT scan images using convolutional neural networks.

    PubMed

    Guo, Jiajia; Du, Hongwei; Zhu, Jianyue; Yan, Ting; Qiu, Bensheng

    2018-07-01

    Relative location prediction in computed tomography (CT) scan images is a challenging problem. Many traditional machine learning methods have been applied in attempts to alleviate this problem. However, the accuracy and speed of these methods cannot meet the requirements of medical scenarios. In this paper, we propose a regression model based on one-dimensional convolutional neural networks (CNN) to determine the relative location of a CT scan image both quickly and precisely. In contrast to other common CNN models that use a two-dimensional image as an input, the input of this CNN model is a feature vector extracted by a shape context algorithm with spatial correlation. Normalization via z-score is first applied as a pre-processing step. Then, in order to prevent overfitting and improve the model's performance, 20% of the elements of the feature vectors are randomly set to zero. This CNN model consists primarily of three one-dimensional convolutional layers, three dropout layers and two fully-connected layers with appropriate loss functions. A public dataset is employed to validate the performance of the proposed model using 5-fold cross-validation. Experimental results demonstrate an excellent performance of the proposed model when compared with contemporary techniques, achieving a median absolute error of 1.04 cm and a mean absolute error of 1.69 cm. The time taken for each relative location prediction is approximately 2 ms. Results indicate that the proposed CNN method can contribute to quick and accurate relative location prediction in CT scan images, which can improve the efficiency of the medical picture archiving and communication system in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
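
The described pre-processing can be sketched with numpy, with a single 1-D convolution standing in for the full three-layer CNN (the feature length, kernel size, and layer structure below are assumptions, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def zscore(v):
    """z-score normalization, the paper's first pre-processing step."""
    return (v - v.mean()) / v.std()

def random_zero(v, frac=0.2, rng=rng):
    """Randomly zero a fraction of elements, as done to reduce overfitting."""
    out = v.copy()
    idx = rng.choice(v.size, size=int(frac * v.size), replace=False)
    out[idx] = 0.0
    return out

def conv1d(v, kernel):
    """'Valid' 1-D convolution, the core operation of the model."""
    return np.convolve(v, kernel, mode="valid")

feat = rng.normal(size=100)                       # stand-in shape-context feature vector
x = random_zero(zscore(feat))                     # normalize, then zero 20% of entries
h = np.maximum(conv1d(x, np.ones(5) / 5), 0.0)    # one conv layer + ReLU
```

A real model would stack three such convolutions with dropout and finish with fully-connected layers regressing the relative axial position.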

  19. Supervised learning technique for the automated identification of white matter hyperintensities in traumatic brain injury.

    PubMed

    Stone, James R; Wilde, Elisabeth A; Taylor, Brian A; Tate, David F; Levin, Harvey; Bigler, Erin D; Scheibel, Randall S; Newsome, Mary R; Mayer, Andrew R; Abildskov, Tracy; Black, Garrett M; Lennon, Michael J; York, Gerald E; Agarwal, Rajan; DeVillasante, Jorge; Ritter, John L; Walker, Peter B; Ahlers, Stephen T; Tustison, Nicholas J

    2016-01-01

    White matter hyperintensities (WMHs) are foci of abnormal signal intensity in white matter regions seen with magnetic resonance imaging (MRI). WMHs are associated with normal ageing and have shown prognostic value in neurological conditions such as traumatic brain injury (TBI). The impracticality of manually quantifying these lesions limits their clinical utility and motivates the utilization of machine learning techniques for automated segmentation workflows. This study develops a concatenated random forest framework with image features for segmenting WMHs in a TBI cohort. The framework is built upon the Advanced Normalization Tools (ANTs) and ANTsR toolkits. MR (3D FLAIR, T2- and T1-weighted) images from 24 service members and veterans scanned in the Chronic Effects of Neurotrauma Consortium's (CENC) observational study were acquired. Manual annotations were employed for both training and evaluation using a leave-one-out strategy. Performance measures include sensitivity, positive predictive value, F1 score and relative volume difference. Final average results were: sensitivity = 0.68 ± 0.38, positive predictive value = 0.51 ± 0.40, F1 = 0.52 ± 0.36, relative volume difference = 43 ± 26%. In addition, three lesion size ranges are selected to illustrate the variation in performance with lesion size. Paired with correlative outcome data, supervised learning methods may allow for identification of imaging features predictive of diagnosis and prognosis in individual TBI patients.
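
The reported evaluation metrics have standard definitions and can be computed from binary ground-truth and predicted lesion masks; the toy masks below are invented, not CENC data.

```python
import numpy as np

def seg_metrics(truth, pred):
    """Sensitivity, PPV, F1 (Dice for binary masks) and relative volume difference."""
    tp = np.sum(truth & pred)
    fp = np.sum(~truth & pred)
    fn = np.sum(truth & ~pred)
    sens = tp / (tp + fn)                               # sensitivity
    ppv = tp / (tp + fp)                                # positive predictive value
    f1 = 2 * sens * ppv / (sens + ppv)                  # harmonic mean of the two
    rvd = abs(pred.sum() - truth.sum()) / truth.sum()   # relative volume difference
    return sens, ppv, f1, rvd

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
sens, ppv, f1, rvd = seg_metrics(truth, pred)
```

Here one lesion voxel is missed and one false positive is added, so sensitivity and PPV both equal 0.75 while the predicted volume happens to match exactly (relative volume difference 0).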

  20. Predicting epidermal growth factor receptor gene amplification status in glioblastoma multiforme by quantitative enhancement and necrosis features deriving from conventional magnetic resonance imaging.

    PubMed

    Dong, Fei; Zeng, Qiang; Jiang, Biao; Yu, Xinfeng; Wang, Weiwei; Xu, Jingjing; Yu, Jinna; Li, Qian; Zhang, Minming

    2018-05-01

    To study whether quantitative enhancement and necrosis features on preoperative conventional MRI (cMRI) have predictive value for epidermal growth factor receptor (EGFR) gene amplification status in glioblastoma multiforme (GBM), fifty-five patients with pathologically determined GBMs who underwent cMRI were retrospectively reviewed. The following cMRI features were quantitatively measured and recorded: long and short diameters of the enhanced portion (LDE and SDE), maximum and minimum thickness of the enhanced portion (MaxTE and MinTE), and long and short diameters of the necrotic portion (LDN and SDN). Univariate analysis of each feature and a decision tree model fed with all the features were performed. The area under the receiver operating characteristic (ROC) curve (AUC) was used to assess the performance of features, and predictive accuracy was used to assess the performance of the model. Among single features, MinTE showed the best performance in differentiating EGFR gene amplification negative (wild-type) (nEGFR) GBM from EGFR gene amplification positive (pEGFR) GBM, achieving an AUC of 0.68 with a cut-off value of 2.6 mm. The decision tree model selected two features, MinTE and SDN, and achieved an accuracy of 0.83 on the validation dataset. Our results suggest that quantitative measurement of the features MinTE and SDN in preoperative cMRI had a high accuracy for predicting EGFR gene amplification status in GBM.
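
A single-feature cut-off classifier of the kind described (a threshold on MinTE) and its AUC can be sketched directly; the AUC below uses the rank (Mann-Whitney) formulation, and all measurements and the cut-off direction are invented for illustration.

```python
def auc(scores_pos, scores_neg):
    """P(score_pos > score_neg), counting ties as 1/2 (Mann-Whitney AUC)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

minte_negative = [1.5, 2.0, 2.4, 3.1]  # hypothetical nEGFR tumours (mm)
minte_positive = [2.8, 3.5, 4.0, 2.2]  # hypothetical pEGFR tumours (mm)

a = auc(minte_positive, minte_negative)
pred_pos = [v > 2.6 for v in minte_positive]  # apply a 2.6 mm cut-off
```

The AUC measures ranking quality independent of any one threshold, which is why the paper reports both the AUC (0.68) and a specific operating cut-off (2.6 mm).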

  1. Applying a CAD-generated imaging marker to assess short-term breast cancer risk

    NASA Astrophysics Data System (ADS)

    Mirniaharikandehei, Seyedehnafiseh; Zarafshani, Ali; Heidari, Morteza; Wang, Yunzhi; Aghaei, Faranak; Zheng, Bin

    2018-02-01

    Although it remains controversial whether using computer-aided detection (CAD) improves radiologists' performance in reading and interpreting mammograms, owing to its high false-positive detection rates, the objective of this study is to investigate and test a new hypothesis: that CAD-generated false-positives, in particular the bilateral summation of false-positives, are a potential imaging marker associated with short-term breast cancer risk. An image dataset involving negative screening mammograms acquired from 1,044 women was retrospectively assembled. Each case involves 4 images: the craniocaudal (CC) and mediolateral oblique (MLO) views of the left and right breasts. In the next subsequent mammography screening, 402 cases were positive for cancer and 642 remained negative. A CAD scheme was applied to process all "prior" negative mammograms. Several features were extracted from the CAD scheme, including detection seeds, the total number of false-positive regions, and the average and sum of detection scores in CC and MLO view images. The features computed from the two bilateral images of the left and right breasts from either the CC or MLO view were then combined. In order to predict the likelihood of each testing case being positive in the next subsequent screening, two logistic regression models were trained and tested using a leave-one-case-out cross-validation method. Data analysis demonstrated a maximum prediction accuracy with an area under the ROC curve of AUC = 0.65 ± 0.017 and a maximum adjusted odds ratio of 4.49 with a 95% confidence interval of [2.95, 6.83]. The results also illustrated an increasing trend in the adjusted odds ratio and risk prediction scores (p < 0.01). Thus, the study showed that CAD-generated false-positives might provide a new quantitative imaging marker to help assess short-term breast cancer risk.
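
For intuition about the odds-ratio result, an unadjusted odds ratio with a Wald 95% confidence interval can be computed from a 2x2 table; note the paper's 4.49 is an *adjusted* odds ratio from logistic regression, which requires the full covariate data, and the counts below are invented.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald 95% CI.
    a, b = high-marker cases / controls; c, d = low-marker cases / controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(120, 30, 282, 612)  # hypothetical counts
```

A confidence interval excluding 1 (as in the paper's [2.95, 6.83]) indicates a statistically significant association between the marker and subsequent cancer detection.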

  2. An evaluation of consensus techniques for diagnostic interpretation

    NASA Astrophysics Data System (ADS)

    Sauter, Jake N.; LaBarre, Victoria M.; Furst, Jacob D.; Raicu, Daniela S.

    2018-02-01

    Learning diagnostic labels from image content has been the standard in computer-aided diagnosis. Most computer-aided diagnosis systems use low-level image features extracted directly from image content to train and test machine learning classifiers for diagnostic label prediction. When the ground truth for the diagnostic labels is not available, reference truth is generated from the experts' diagnostic interpretations of the image/region of interest. More specifically, when the label is uncertain, e.g. when multiple experts label an image and their interpretations differ, techniques to handle the label variability are necessary. In this paper, we compare three consensus techniques that are typically used to encode the variability in the experts' labeling of medical data: mean, median and mode, and their effects on simple classifiers that can handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees). Given that the NIH/NCI Lung Image Database Consortium (LIDC) data provides interpretations for lung nodules by up to four radiologists, we leverage the LIDC data to evaluate and compare these consensus approaches when creating computer-aided diagnosis systems for lung nodules. First, low-level image features of nodules are extracted and paired with their radiologists' semantic ratings (1 = most likely benign, ..., 5 = most likely malignant); second, machine learning multi-class classifiers that handle deterministic labels (decision trees) and probabilistic vectors of labels (belief decision trees) are built to predict the lung nodules' semantic ratings. We show that the mean-based consensus generates the most robust classifier overall when compared to the median- and mode-based consensus. Lastly, the results of this study show that, when building CAD systems with uncertain diagnostic interpretation, it is important to evaluate different strategies for encoding and predicting the diagnostic label.
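
The three consensus rules compared in the paper are one-liners over a nodule's ratings from up to four radiologists (the ratings below are invented):

```python
from statistics import mean, median, mode

# One nodule's malignancy ratings from four radiologists
# (1 = most likely benign ... 5 = most likely malignant).
ratings = [2, 3, 3, 5]

consensus = {
    "mean": mean(ratings),      # fractional; can feed a regressor or be rounded
    "median": median(ratings),  # robust to a single outlying rater
    "mode": mode(ratings),      # the most frequent rating
}
```

The rules disagree exactly when raters disagree, which is why the choice of consensus measurably changes the downstream classifier's labels and robustness.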

  3. On the probability density function and characteristic function moments of image steganalysis in the log prediction error wavelet subband

    NASA Astrophysics Data System (ADS)

    Bao, Zhenkun; Li, Xiaolong; Luo, Xiangyang

    2017-01-01

    Extracting informative statistical features is the most essential technical issue of steganalysis. Among various steganalysis methods, probability density function (PDF) and characteristic function (CF) moments are two important types of features due to their excellent ability to distinguish cover images from stego ones. The two types of features are quite similar in definition; the only difference is that the PDF moments are computed in the spatial domain, while the CF moments are computed in the Fourier-transformed domain. The comparison between PDF and CF moments is therefore an interesting question in steganalysis. Several theoretical results have been derived, and CF moments are proved better than PDF moments in some cases. However, in the log prediction error wavelet subband of wavelet decomposition, experiments show the opposite result, which has so far lacked a rigorous explanation. To solve this problem, a comparison result based on rigorous proof is presented: the first-order PDF moment is proved better than the CF moment, while the second-order CF moment is better than the PDF moment. This work aims to open the theoretical discussion on steganalysis and the question of finding suitable statistical features.
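
The two feature families can be sketched side by side: n-th moments of a sub-band histogram (PDF moments) versus n-th moments of its Fourier-transform magnitude (CF moments). The definitions below follow a common steganalysis formulation and may differ in detail from the paper's; the "sub-band" is just random data.

```python
import numpy as np

def pdf_moment(hist, n):
    """n-th moment of the normalized histogram, taken in the spatial domain."""
    x = np.arange(len(hist))
    p = hist / hist.sum()
    return np.sum(x**n * p)

def cf_moment(hist, n):
    """n-th moment of the characteristic function magnitude (Fourier domain)."""
    cf = np.abs(np.fft.fft(hist))
    k = np.arange(len(cf) // 2)        # non-redundant half of the spectrum
    w = cf[: len(cf) // 2]
    return np.sum(k**n * w) / np.sum(w)

rng = np.random.default_rng(1)
hist, _ = np.histogram(rng.normal(size=4096), bins=64)
m1_pdf, m1_cf = pdf_moment(hist, 1), cf_moment(hist, 1)
```

Both reduce a whole histogram to a scalar; which scalar discriminates cover from stego better is exactly the question the paper answers per moment order.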

  4. Feature Selection, Flaring Size and Time-to-Flare Prediction Using Support Vector Regression, and Automated Prediction of Flaring Behavior Based on Spatio-Temporal Measures Using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Al-Ghraibah, Amani

    Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on Earth, including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advance warning of solar flare occurrence is an area of increasing concern, and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work: a true positive rate (TPR) of 0.8, a true negative rate (TNR) of 0.7, and a true skill score (TSS) of 0.49. This dissertation addresses three topics. The first is an extension of our previous work [111], in which we apply a feature selection method to determine an appropriate feature subset, with cross-validated classification based on a histogram analysis of the selected features. Classification using the top five features from this analysis yields better accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare: we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class.
When we additionally consider non-flaring regions, we find an increased average error of approximately 3/4 of a GOES class. We also consider thresholding the regressed flare size for the experiment containing both flaring and non-flaring regions and find a TPR of 0.69 and a TNR of 0.86 for flare prediction, consistent with our previous studies of flare prediction using the same magnetic complexity features. The results for both of these size regression experiments are consistent across a wide range of predictive time windows, indicating that the magnetic complexity features may be persistent in appearance long before flare activity. This conjecture is supported by our larger error rates, of some 40 hours, in the time-to-flare regression problem. The magnetic complexity features considered here appear to have discriminative potential for flare size, but their persistence in time makes them less discriminative for the time-to-flare problem. We also study the prediction of solar flare size and time-to-flare using two temporal features, namely the Δ- and Δ-Δ-features; the same average size and time-to-flare regression errors are found when these temporal features are used. In the third topic, we study the temporal evolution of active region magnetic fields using Hidden Markov Models (HMMs), one of the efficient temporal analysis methods found in the literature. We extract the same 38 features describing the complexity of the photospheric magnetic field and convert them into a sequence of symbols using a k-nearest-neighbor search. We study several parameters prior to prediction, such as the length of the training window Wtrain, which denotes the number of history images used to train the flare and non-flare HMMs, and the number of hidden states Q. In the training phase, the model parameters of the HMM for each category are optimized so as to best describe the training symbol sequences.
In the testing phase, we use the best flare and non-flare models to predict/classify active regions as flaring or non-flaring using a sliding-window method. The best prediction results are found when the history training window is 15 images (i.e., Wtrain = 15) and the length of the sliding testing window is less than or equal to Wtrain; this gives a TPR of 0.79, consistent with previous flare prediction work, a TNR of 0.87, and a TSS of 0.66, the latter two both higher than our previous flare prediction work. We find that the number of hidden states that best describes the temporal evolution of the solar active regions is five, while similar metrics are obtained with different numbers of states.
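
The skill scores quoted above are related by the standard identity TSS = TPR + TNR - 1, which makes the reported improvements easy to check:

```python
def tss(tpr, tnr):
    """True skill statistic: true positive rate + true negative rate - 1."""
    return tpr + tnr - 1.0

hmm_tss = tss(0.79, 0.87)  # the HMM result, ~0.66 as reported
rvm_tss = tss(0.80, 0.70)  # the earlier RVM baseline, ~0.5 (reported as 0.49,
                           # presumably from unrounded rates)
```

Unlike raw accuracy, the TSS is insensitive to the flare/non-flare class imbalance, which is why it is the preferred comparison metric here.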

  5. Ring-enhancement pattern on contrast-enhanced CT predicts adenosquamous carcinoma of the pancreas: a matched case-control study.

    PubMed

    Imaoka, Hiroshi; Shimizu, Yasuhiro; Mizuno, Nobumasa; Hara, Kazuo; Hijioka, Susumu; Tajika, Masahiro; Tanaka, Tsutomu; Ishihara, Makoto; Ogura, Takeshi; Obayashi, Tomohiko; Shinagawa, Akihide; Sakaguchi, Masafumi; Yamaura, Hidekazu; Kato, Mina; Niwa, Yasumasa; Yamao, Kenji

    2014-01-01

    Adenosquamous carcinoma of the pancreas (ASC) is a rare malignant neoplasm of the pancreas, exhibiting both glandular and squamous differentiation. However, little is known about its imaging features. This study examined the imaging features of pancreatic ASC. We evaluated images from contrast-enhanced computed tomography (CT) and endoscopic ultrasonography (EUS). As controls, solid pancreatic neoplasms matched in a 2:1 ratio to ASC cases for age, sex and tumor location were also evaluated. Twenty-three ASC cases were examined, and 46 solid pancreatic neoplasms (43 pancreatic ductal adenocarcinomas, two pancreatic neuroendocrine tumors and one acinar cell carcinoma) were matched as controls. Univariate analysis demonstrated significant differences in the outline and vascularity of tumors on contrast-enhanced CT between the ASC and control groups (P < 0.001 and P < 0.001, respectively). A smooth outline, cystic changes, and the ring-enhancement pattern on contrast-enhanced CT were found to have significant predictive power by stepwise forward logistic regression analysis (P = 0.044, P = 0.010, and P = 0.001, respectively). Of the three, the ring-enhancement pattern was the most useful; its sensitivity, specificity, positive predictive value and negative predictive value for the diagnosis of ASC were 65.2%, 89.6%, 75.0% and 84.3%, respectively. These results demonstrate that the presence of the ring-enhancement pattern on contrast-enhanced CT is the most useful predictive factor for ASC. Copyright © 2014 IAP and EPC. Published by Elsevier B.V. All rights reserved.

  6. Multi-Level and Multi-Scale Feature Aggregation Using Pretrained Convolutional Neural Networks for Music Auto-Tagging

    NASA Astrophysics Data System (ADS)

    Lee, Jongpil; Nam, Juhan

    2017-08-01

    Music auto-tagging is often handled in a similar manner to image classification by regarding the 2D audio spectrogram as image data. However, music auto-tagging is distinguished from image classification in that the tags are highly diverse and have different levels of abstraction. Considering this issue, we propose a convolutional neural network (CNN)-based architecture that embraces multi-level and multi-scale features. The architecture is trained in three steps. First, we conduct supervised feature learning to capture local audio features using a set of CNNs with different input sizes. Second, we extract audio features from each layer of the pre-trained convolutional networks separately and aggregate them over a long audio clip. Finally, we feed them into fully-connected networks and make final predictions of the tags. Our experiments show that using the combination of multi-level and multi-scale features is highly effective in music auto-tagging, and the proposed method outperforms the previous state of the art on the MagnaTagATune dataset and the Million Song Dataset. We further show that the proposed architecture is useful in transfer learning.

  7. Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer

    NASA Astrophysics Data System (ADS)

    Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana

    2017-03-01

    Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would benefit most from optimized treatment, thereby minimizing the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessments of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available, and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level-sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.

  8. SU-E-J-261: The Importance of Appropriate Image Preprocessing to Augment the Information of Radiomics Image Features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L; Fried, D; Fave, X

    Purpose: To investigate how different image preprocessing techniques, their parameters, and different boundary handling techniques can augment the information of features and improve the features' differentiating capability. Methods: Twenty-seven NSCLC patients with a solid tumor volume and no visually obvious necrotic regions in the simulation CT images were identified. Fourteen of these patients had a necrotic region visible in their pre-treatment PET images (necrosis group), and thirteen had no visible necrotic region in the pre-treatment PET images (non-necrosis group). We investigated how image preprocessing can impact the ability of radiomics image features extracted from the CT to differentiate between the two groups. We expected the intensity histogram in the necrosis group to be more negatively skewed and its uniformity to be lower. Therefore, we analyzed two first-order features, skewness and uniformity, on the image inside the GTV in the intensity range [-20 HU, 180 HU] under combinations of several image preprocessing techniques: (1) applying the isotropic Gaussian or anisotropic diffusion smoothing filter with a range of parameters (Gaussian smoothing: size = 11, sigma = 0:0.1:2.3; anisotropic smoothing: iterations = 4, kappa = 0:10:110); (2) applying the boundary-adapted Laplacian filter; and (3) applying the adaptive upper threshold for the intensity range. A two-tailed t-test was used to evaluate the differentiating capability of CT features on pre-treatment PET necrosis. Results: Without any preprocessing, no differences in either skewness or uniformity were observed between the two groups. After applying appropriate Gaussian filters (sigma >= 1.3) or anisotropic filters (kappa >= 60) with the adaptive upper threshold, skewness was significantly more negative in the necrosis group (p < 0.05).
By applying boundary-adapted Laplacian filtering after appropriate Gaussian filters (0.5 <= sigma <= 1.1) or anisotropic filters (20 <= kappa <= 50), the uniformity was significantly lower in the necrosis group (p < 0.05). Conclusion: Appropriate selection of image preprocessing techniques allows radiomics features to extract more useful information and thereby improve prediction models based on these features.
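
The two first-order features can be sketched on voxels inside the GTV restricted to the stated HU window; the toy intensity samples below stand in for real CT voxels, and the smoothing filters are omitted.

```python
import numpy as np

def first_order(vox, lo=-20, hi=180, bins=32):
    """Skewness and histogram uniformity (energy) within [lo, hi] HU."""
    v = vox[(vox >= lo) & (vox <= hi)]
    skew = np.mean((v - v.mean()) ** 3) / v.std() ** 3
    p, _ = np.histogram(v, bins=bins)
    p = p / p.sum()
    uniformity = np.sum(p**2)
    return skew, uniformity

rng = np.random.default_rng(2)
solid = rng.normal(80, 20, 5000)                              # homogeneous tumour
necrotic = np.concatenate([solid, rng.normal(10, 10, 1500)])  # adds low-HU necrosis

s_solid, u_solid = first_order(solid)
s_necr, u_necr = first_order(necrotic)
```

The added low-HU component pulls the histogram's tail left (more negative skewness) and spreads it over more bins (lower uniformity), mirroring the direction of the group differences the abstract reports.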

  9. "Radio-oncomics" : The potential of radiomics in radiation oncology.

    PubMed

    Peeken, Jan Caspar; Nüsslin, Fridtjof; Combs, Stephanie E

    2017-10-01

    Radiomics, a recently introduced concept, describes quantitative computerized algorithm-based feature extraction from imaging data including computed tomography (CT), magnetic resonance imaging (MRI), or positron-emission tomography (PET) images. For radiation oncology it offers the potential to significantly influence clinical decision-making and thus therapy planning and follow-up workflow. After image acquisition, image preprocessing, and defining regions of interest by structure segmentation, algorithms are applied to calculate shape, intensity, texture, and multiscale filter features. By combining multiple features and correlating them with clinical outcome, prognostic models can be created. Retrospective studies have proposed radiomics classifiers predicting, e.g., overall survival, radiation treatment response, distant metastases, or radiation-related toxicity. In addition, radiomics features can be correlated with genomic information ("radiogenomics") and could be used for tumor characterization. Distinct patterns based on data-based as well as genomics-based features will influence radiation oncology in the future. Individualized treatments in terms of dose level adaptation and target volume definition, as well as other outcome-related parameters, will depend on radiomics and radiogenomics. By integration of various datasets, the prognostic power can be increased, making radiomics a valuable part of future precision medicine approaches. This perspective demonstrates the evidence for the radiomics concept in radiation oncology. The necessity of further studies to integrate radiomics classifiers into clinical decision-making and the radiation therapy workflow is emphasized.

  10. Image processing and machine learning for fully automated probabilistic evaluation of medical images.

    PubMed

    Sajn, Luka; Kukar, Matjaž

    2011-12-01

    The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. However, further feature construction with principal component analysis on these features elevates results to an even higher accuracy level that represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  11. Learning Scene Categories from High Resolution Satellite Image for Aerial Video Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheriyadat, Anil M

    2011-01-01

    Automatic scene categorization can benefit various aerial video processing applications. This paper addresses the problem of predicting the scene category from aerial video frames using a prior model learned from satellite imagery. We show that local and global features, in the form of line statistics and 2-D power spectrum parameters respectively, can characterize the aerial scene well. The line feature statistics and spatial frequency parameters are useful cues to distinguish between different urban scene categories. We learn the scene prediction model from high-resolution satellite imagery and test the model on the Columbus Surrogate Unmanned Aerial Vehicle (CSUAV) dataset collected by a high-altitude wide-area UAV sensor platform. We compare the proposed features with the popular Scale Invariant Feature Transform (SIFT) features. Our experimental results show that the proposed approach outperforms the SIFT model when training and testing are conducted on disparate data sources.
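
A global 2-D power-spectrum feature of the kind described can be sketched with numpy: radially average the power spectrum so each image becomes a short frequency profile whose shape differs between scene categories. The binning scheme and toy image are assumptions for illustration.

```python
import numpy as np

def radial_power_profile(img, nbins=8):
    """Radially averaged 2-D power spectrum: one value per frequency annulus."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)          # radius from DC component
    bins = np.linspace(0, r.max() + 1e-9, nbins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    return np.array([power.ravel()[idx == i].mean() for i in range(nbins)])

img = np.add.outer(np.arange(32.0), np.arange(32.0))  # smooth toy "scene"
profile = radial_power_profile(img)
```

Smooth scenes concentrate energy in the low-frequency annuli, while cluttered urban scenes spread energy toward high frequencies, which is what makes the profile a discriminative global cue.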

  12. Atorvastatin effect evaluation based on feature combination of three-dimension ultrasound images

    NASA Astrophysics Data System (ADS)

    Luo, Yongkang; Ding, Mingyue

    2016-03-01

    In the past decades, stroke has become a common worldwide cause of death and disability. It is well known that ischemic stroke is mainly caused by carotid atherosclerosis. As an inexpensive, convenient and fast means of detection, ultrasound technology is applied widely in the prevention and treatment of carotid atherosclerosis. Recently, many studies have focused on how to quantitatively evaluate local arterial effects of medical treatment for carotid diseases. Therefore, an evaluation method based on feature combination was proposed to detect potential changes in the carotid arteries after atorvastatin treatment. The support vector machine (SVM) and a 10-fold cross-validation protocol were utilized on a database of 5533 carotid ultrasound images of 38 patients (17 in the atorvastatin group and 21 in the placebo group) at baseline and after 3 months of treatment. After optimizing combinations of many features (including morphological and texture features), the evaluation results of single features and different feature combinations were compared. The experimental results showed that single features perform poorly, while the best feature combination has good recognition ability, with an accuracy of 92.81%, sensitivity of 80.95%, specificity of 95.52%, positive predictive value of 80.47%, negative predictive value of 95.65%, Matthews correlation coefficient of 76.27%, and Youden's index of 76.48%. The receiver operating characteristic (ROC) curve also performed well, with an area under the curve (AUC) of 0.9663, better than the 0.9423 obtained using all features. Thus, this novel method can reliably and accurately evaluate the effect of atorvastatin treatment.

  13. A scene-analysis approach to remote sensing. [San Francisco, California]

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.

    1978-01-01

    The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensor model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.

  14. Design of a multi-spectral imager built using the compressive sensing single-pixel camera architecture

    NASA Astrophysics Data System (ADS)

    McMackin, Lenore; Herman, Matthew A.; Weston, Tyler

    2016-02-01

    We present the design of a multi-spectral imager built using the architecture of the single-pixel camera. The architecture is enabled by the novel sampling theory of compressive sensing implemented optically using the Texas Instruments DLP™ micro-mirror array. The array not only implements spatial modulation necessary for compressive imaging but also provides unique diffractive spectral features that result in a multi-spectral, high-spatial resolution imager design. The new camera design provides multi-spectral imagery in a wavelength range that extends from the visible to the shortwave infrared without reduction in spatial resolution. In addition to the compressive imaging spectrometer design, we present a diffractive model of the architecture that allows us to predict a variety of detailed functional spatial and spectral design features. We present modeling results, architectural design and experimental results that prove the concept.
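    The single-pixel measurement model is y = Φx, where the rows of Φ are the DMD modulation patterns; compressive-sensing reconstruction then solves for a sparse x from far fewer measurements than pixels. As an illustration (the camera's actual reconstruction algorithm is not given in the abstract), orthogonal matching pursuit is a standard greedy solver for this model:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x.

    A standard sparse-recovery sketch for the single-pixel measurement
    model; not the camera's proprietary reconstruction software.
    """
    m, n = Phi.shape
    support, r = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ r))))  # best-matching atom
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef                     # update residual
    x = np.zeros(n)
    x[support] = coef
    return x
```

    With 20 random measurements of a length-50, 2-sparse signal, the support and coefficients are recovered essentially exactly, which is the compression the architecture relies on.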

  15. Application of local binary pattern and human visual Fibonacci texture features for classification of different medical images

    NASA Astrophysics Data System (ADS)

    Sanghavi, Foram; Agaian, Sos

    2017-05-01

    The goal of this paper is to (a) test the nuclei-based Computer Aided Cancer Detection system using a human-visual-based system on histopathology images and (b) compare the results of the proposed system with the Local Binary Pattern and modified Fibonacci-p pattern systems. The system performance is evaluated using different parameters such as accuracy, specificity, sensitivity, positive predictive value, and negative predictive value on 251 prostate histopathology images. An accuracy of 96.69% was observed for cancer detection using the proposed human-visual-based system, compared to 87.42% and 94.70% for Local Binary Patterns and the modified Fibonacci-p patterns, respectively.
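    For reference, the basic 8-neighbour local binary pattern can be sketched as below; this is the generic LBP baseline, not the human-visual or Fibonacci-p variant evaluated in the paper.

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern for interior pixels.

    A generic LBP sketch: each pixel gets an 8-bit code, one bit per
    neighbour, set when the neighbour is >= the centre intensity.
    """
    img = np.asarray(img)
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (neighbour >= centre).astype(int) << bit  # set bit if >= centre
    return codes
```

    A histogram of these codes over an image patch is the usual LBP texture descriptor fed to a classifier.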

  16. Automated assessment of imaging biomarkers for the PanCan lung cancer risk prediction model with validation on NLST data

    NASA Astrophysics Data System (ADS)

    Wiemker, Rafael; Sevenster, Merlijn; MacMahon, Heber; Li, Feng; Dalal, Sandeep; Tahmasebi, Amir; Klinder, Tobias

    2017-03-01

    The imaging biomarkers EmphysemaPresence and NoduleSpiculation are crucial inputs for most models aiming to predict the risk of indeterminate pulmonary nodules detected at CT screening. To increase reproducibility and to accelerate screening workflow it is desirable to assess these biomarkers automatically. Validation on NLST images indicates that standard histogram measures are not sufficient to assess EmphysemaPresence in screenees. However, automatic scoring of bulla-resembling low attenuation areas can achieve agreement with experts with close to 80% sensitivity and specificity. NoduleSpiculation can be automatically assessed with similar accuracy. We find a dedicated spiculi tracing score to slightly outperform generic combinations of texture features with classifiers.
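    A common automatic emphysema score of the kind the histogram baseline represents is the low-attenuation area fraction (LAA%); the -950 HU cutoff below is a conventional default, not necessarily the paper's bulla-scoring rule.

```python
import numpy as np

def low_attenuation_fraction(hu_values, threshold=-950):
    """Fraction of lung voxels below a low-attenuation cutoff (LAA%).

    A standard emphysema score sketch; the -950 HU cutoff is a common
    convention, assumed here rather than taken from the paper.
    """
    hu = np.asarray(hu_values, float)
    return float((hu < threshold).mean())
```

    Thresholding this fraction yields an EmphysemaPresence flag; the paper's point is that such histogram measures alone were insufficient, motivating the bulla-shaped low-attenuation scoring.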

  17. Hierarchical Neural Representation of Dreamed Objects Revealed by Brain Decoding with Deep Neural Network Features.

    PubMed

    Horikawa, Tomoyasu; Kamitani, Yukiyasu

    2017-01-01

    Dreaming is generally thought to be generated by spontaneous brain activity during sleep with patterns common to waking experience. This view is supported by a recent study demonstrating that dreamed objects can be predicted from brain activity during sleep using statistical decoders trained with stimulus-induced brain activity. However, it remains unclear whether and how visual image features associated with dreamed objects are represented in the brain. In this study, we used a deep neural network (DNN) model for object recognition as a proxy for hierarchical visual feature representation, and DNN features for dreamed objects were analyzed with brain decoding of fMRI data collected during dreaming. The decoders were first trained with stimulus-induced brain activity labeled with the feature values of the stimulus image from multiple DNN layers. The decoders were then used to decode DNN features from the dream fMRI data, and the decoded features were compared with the averaged features of each object category calculated from a large-scale image database. We found that the feature values decoded from the dream fMRI data positively correlated with those associated with dreamed object categories at mid- to high-level DNN layers. Using the decoded features, the dreamed object category could be identified at above-chance levels by matching them to the averaged features for candidate categories. The results suggest that dreaming recruits hierarchical visual feature representations associated with objects, which may support phenomenal aspects of dream experience.
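    The identification step, matching decoded features to category-averaged features, can be sketched as a correlation-based nearest-template search (a simplification of the paper's pairwise identification analysis):

```python
import numpy as np

def identify_category(decoded, templates):
    """Match a decoded feature vector to the candidate category whose
    averaged feature vector correlates with it best.

    A sketch of the identification step; `templates` maps a category
    name to its averaged feature vector.
    """
    best, best_r = None, -2.0
    for name, feat in templates.items():
        r = np.corrcoef(decoded, feat)[0, 1]  # Pearson correlation
        if r > best_r:
            best, best_r = name, r
    return best, best_r
```

    Identification succeeds "above chance" when the decoded vector correlates more strongly with the true category's template than with the alternatives.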

  18. TU-CD-BRB-10: 18F-FDG PET Image-Derived Tumor Features Highlight Altered Pathways Identified by Transcriptomic Analysis in Head and Neck Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tixier, F; INSERM UMR1101 LaTIM, Brest; Cheze-Le-Rest, C

    2015-06-15

    Purpose: Several quantitative features can be extracted from 18F-FDG PET images, such as standardized uptake values (SUVs), metabolic tumor volume (MTV), shape characterization (SC) or intra-tumor radiotracer heterogeneity quantification (HQ). Some of these features calculated from baseline 18F-FDG PET images have shown a prognostic and predictive clinical value. It has been hypothesized that these features highlight underlying tumor patho-physiological processes at smaller scales. The objective of this study was to investigate the ability of recovering alterations of signaling pathways from FDG PET image-derived features. Methods: 52 patients were prospectively recruited from two medical centers (Brest and Poitiers). All patients underwent an FDG PET scan for staging and biopsies of both healthy and primary tumor tissues. Biopsies went through a transcriptomic analysis performed in four spates on 4×44k chips (Agilent™). Primary tumors were delineated in the PET images using the Fuzzy Locally Adaptive Bayesian algorithm and characterized using 10 features including SUVs, SC and HQ. A module network algorithm followed by functional annotation was exploited in order to link PET features with signaling pathway alterations. Results: Several PET-derived features were found to discriminate differentially expressed genes between tumor and healthy tissue (fold-change >2, p<0.01) into 30 co-regulated groups (p<0.05). Functional annotations applied to these groups of genes highlighted associations with well-known pathways involved in cancer processes, such as cell proliferation and apoptosis, as well as with more specific ones such as unsaturated fatty acids. Conclusion: Quantitative features extracted from baseline 18F-FDG PET images, usually exploited only for diagnosis and staging, were identified in this work as being related to specific altered pathways and may show promise as tools for personalizing treatment decisions.
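    The fold-change screen (> 2 in either direction) used to select differentially expressed genes can be sketched as below; the accompanying significance test (p < 0.01) is omitted for brevity.

```python
import numpy as np

def differentially_expressed(tumor, healthy, fc_cutoff=2.0):
    """Indices of genes whose tumor/healthy expression ratio exceeds the
    fold-change cutoff in either direction.

    A sketch of the abstract's fold-change > 2 screen; the p < 0.01
    statistical test applied alongside it is not reproduced here.
    """
    tumor, healthy = np.asarray(tumor, float), np.asarray(healthy, float)
    ratio = tumor / healthy
    return np.where((ratio >= fc_cutoff) | (ratio <= 1.0 / fc_cutoff))[0]
```

    The surviving genes are then grouped into co-regulated modules and functionally annotated against known pathways.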

  19. Association between background parenchymal enhancement of breast MRI and BIRADS rating change in the subsequent screening

    NASA Astrophysics Data System (ADS)

    Aghaei, Faranak; Mirniaharikandehei, Seyedehnafiseh; Hollingsworth, Alan B.; Stoug, Rebecca G.; Pearce, Melanie; Liu, Hong; Zheng, Bin

    2018-03-01

    Although breast magnetic resonance imaging (MRI) has been used as a breast cancer screening modality for high-risk women, its cancer detection yield remains low (i.e., <= 3%). Thus, increasing breast MRI screening efficacy and cancer detection yield is an important clinical issue in breast cancer screening. In this study, we investigated the association between the background parenchymal enhancement (BPE) of breast MRI and the change of diagnostic (BIRADS) status in the next subsequent breast MRI screening. A dataset of 65 breast MRI screening cases was retrospectively assembled. All cases were rated BIRADS-2 (benign findings). In the subsequent screening, 4 cases were malignant (BIRADS-6), 48 remained BIRADS-2, and 13 were downgraded to negative (BIRADS-1). A computer-aided detection scheme was applied to process images of the first set of breast MRI screenings. A total of 33 features was computed, including texture features and global BPE features. Texture features were computed from either a gray-level co-occurrence matrix or a gray-level run-length matrix. Ten global BPE features were also initially computed from the two breast regions and the bilateral difference between the left and right breasts. Box-plot based analysis showed a positive association between texture features and BIRADS rating levels in the second screening. Furthermore, a logistic regression model was built using optimal features selected by a CFS-based feature selection method. Using leave-one-case-out cross-validation, classification yielded an overall 75% accuracy in predicting the improvement (or downgrade) of diagnostic status (to BIRADS-1) in the subsequent breast MRI screening. This study demonstrated the potential of developing a new quantitative imaging marker to predict short-term diagnostic status change, which may help eliminate a high fraction of unnecessary repeated breast MRI screenings and increase the cancer detection yield.
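    One of the texture families used here, the gray-level co-occurrence matrix, can be sketched for a single pixel offset together with its contrast feature (an illustrative subset of the study's 33 features):

```python
import numpy as np

def glcm_contrast(img, dy=0, dx=1, levels=None):
    """Gray-level co-occurrence matrix for one offset, plus its contrast
    feature (a minimal sketch of one GLCM texture feature)."""
    img = np.asarray(img, int)
    levels = levels or img.max() + 1
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            glcm[img[y, x], img[y + dy, x + dx]] += 1  # count the pixel pair
    glcm /= glcm.sum()                                 # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return glcm, float((glcm * (i - j) ** 2).sum())    # contrast feature
```

    Other GLCM features (energy, homogeneity, correlation) are different weighted sums over the same normalized matrix.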

  20. Fully convolutional networks (FCNs)-based segmentation method for colorectal tumors on T2-weighted magnetic resonance images.

    PubMed

    Jian, Junming; Xiong, Fei; Xia, Wei; Zhang, Rui; Gu, Jinhui; Wu, Xiaodong; Meng, Xiaochun; Gao, Xin

    2018-06-01

    Segmentation of colorectal tumors is the basis of preoperative prediction, staging, and therapeutic response evaluation. Due to the blurred boundary between lesions and normal colorectal tissue, it is hard to achieve accurate segmentation. Routine manual or semi-manual segmentation methods are extremely tedious, time-consuming, and highly operator-dependent. In the framework of FCNs, a segmentation method for colorectal tumors was presented. Normalization was applied to reduce the differences among images. Borrowing from transfer learning, VGG-16 was employed to extract features from normalized images. We constructed five side-output blocks from the last convolutional layer of each block of VGG-16; these side-output blocks capture multiscale features and produce corresponding predictions. Finally, all of the predictions from the side-output blocks were fused to determine the final boundaries of the tumors. A quantitative comparison with 2772 manual segmentations of colorectal tumors on T2-weighted magnetic resonance images shows that the average Dice similarity coefficient, positive predictive value, specificity, sensitivity, Hammoude distance, and Hausdorff distance were 83.56%, 82.67%, 96.75%, 87.85%, 0.2694, and 8.20, respectively. The proposed method is superior to U-net in colorectal tumor segmentation (P < 0.05). There is no difference between cross-entropy loss and Dice-based loss in colorectal tumor segmentation (P > 0.05). The results indicate that the introduction of FCNs contributed to accurate segmentation of colorectal tumors. This method has the potential to replace the present time-consuming and nonreproducible manual segmentation method.
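    The primary overlap metric, the Dice similarity coefficient, compares the predicted and reference binary masks; a generic sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). A generic sketch of the overlap metric."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect match
```

    Dice is also usable directly as a differentiable training loss (the "Dice-based loss" compared against cross-entropy in the study).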

  1. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging.

    PubMed

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D; Joel, Suresh; Pekar, James J; Mostofsky, Stewart H; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD.
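    A CUR decomposition expresses A ≈ C·U·R using actual columns and rows of A, which keeps the factors interpretable (here, actual connectivity directions). The sketch below selects columns and rows by norm for simplicity; leverage-score sampling, as typically used in CUR analyses, would replace that selection rule.

```python
import numpy as np

def cur_decomposition(A, c, r):
    """Simple CUR sketch: pick the c largest-norm columns and r largest-norm
    rows of A, then solve for the small linking matrix U.

    Norm-based selection is an illustrative simplification; statistical
    leverage scores are the usual choice in CUR literature.
    """
    cols = np.argsort(np.linalg.norm(A, axis=0))[-c:]   # top-norm columns
    rows = np.argsort(np.linalg.norm(A, axis=1))[-r:]   # top-norm rows
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)       # best linking matrix
    return C, U, R
```

    For a rank-1 matrix, any nonzero column and row reproduce A exactly; for higher ranks the approximation quality depends on the selection rule.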

  2. Automated diagnoses of attention deficit hyperactive disorder using magnetic resonance imaging

    PubMed Central

    Eloyan, Ani; Muschelli, John; Nebel, Mary Beth; Liu, Han; Han, Fang; Zhao, Tuo; Barber, Anita D.; Joel, Suresh; Pekar, James J.; Mostofsky, Stewart H.; Caffo, Brian

    2012-01-01

    Successful automated diagnoses of attention deficit hyperactive disorder (ADHD) using imaging and functional biomarkers would have fundamental consequences on the public health impact of the disease. In this work, we show results on the predictability of ADHD using imaging biomarkers and discuss the scientific and diagnostic impacts of the research. We created a prediction model using the landmark ADHD 200 data set focusing on resting state functional connectivity (rs-fc) and structural brain imaging. We predicted ADHD status and subtype, obtained by behavioral examination, using imaging data, intelligence quotients and other covariates. The novel contributions of this manuscript include a thorough exploration of prediction and image feature extraction methodology on this form of data, including the use of singular value decompositions (SVDs), CUR decompositions, random forest, gradient boosting, bagging, voxel-based morphometry, and support vector machines as well as important insights into the value, and potentially lack thereof, of imaging biomarkers of disease. The key results include the CUR-based decomposition of the rs-fc-fMRI along with gradient boosting and the prediction algorithm based on a motor network parcellation and random forest algorithm. We conjecture that the CUR decomposition is largely diagnosing common population directions of head motion. Of note, a byproduct of this research is a potential automated method for detecting subtle in-scanner motion. The final prediction algorithm, a weighted combination of several algorithms, had an external test set specificity of 94% with sensitivity of 21%. The most promising imaging biomarker was a correlation graph from a motor network parcellation. In summary, we have undertaken a large-scale statistical exploratory prediction exercise on the unique ADHD 200 data set. The exercise produced several potential leads for future scientific exploration of the neurological basis of ADHD. PMID:22969709

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open source and free platform to facilitate radiomics research: the "Radiomics toolbox" in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The "Radiomics toolbox" strives to fill the need for such a software platform. The platform supports (1) import of various kinds of image modalities, like CT, PET, MR, SPECT and US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image-based features, like 1st-order statistics, gray-level co-occurrence and zone-size matrix based texture features, and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves, and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are performed with Matlab for ease of development and readability of code and features for a wide audience. Open-source software developed with other programming languages is integrated to enhance various components of this toolbox, for example, Java-based DCM4CHE for import of DICOM and R for statistical analysis. Results: The Radiomics toolbox will be distributed as an open source, GNU copyrighted software. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. The analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the "Computational Environment for Radiotherapy Research" to the "Computational Environment for Radiological Research".
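    The first-order statistics family mentioned in (3) can be sketched as below; this is an illustrative subset of such features, and the 16-bin histogram used for entropy is an assumption, not the toolbox's setting.

```python
import numpy as np

def first_order_features(roi):
    """First-order intensity statistics of a delineated region.

    An illustrative subset of a radiomics first-order feature family;
    the histogram bin count for entropy is an assumed parameter.
    """
    v = np.asarray(roi, float).ravel()
    hist, _ = np.histogram(v, bins=16)
    p = hist / hist.sum()
    p = p[p > 0]                                    # drop empty bins for log
    return {
        "mean": float(v.mean()),
        "std": float(v.std()),
        "min": float(v.min()),
        "max": float(v.max()),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy (bits)
    }
```

    Texture (co-occurrence, zone-size) and shape families extend this per-region feature vector before statistical modeling.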

  4. Forensic Comparison and Matching of Fingerprints: Using Quantitative Image Measures for Estimating Error Rates through Understanding and Predicting Difficulty

    PubMed Central

    Kellman, Philip J.; Mnookin, Jennifer L.; Erlikhman, Gennady; Garrigan, Patrick; Ghose, Tandra; Mettler, Everett; Charlton, David; Dror, Itiel E.

    2014-01-01

    Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness-of-fit measures to the original data set and in a cross-validation test. The results indicate the plausibility of using objective image metrics to predict expert performance and subjective assessment of difficulty in fingerprint comparisons. PMID:24788812
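    The regression modelling step, predicting difficulty from derived image metrics, can be sketched as ordinary least squares with a goodness-of-fit (R²) readout; this is a generic stand-in, not the authors' exact model specification.

```python
import numpy as np

def fit_linear_model(X, y):
    """Ordinary least squares with an intercept, returning coefficients
    and R^2 (a generic multiple-regression sketch)."""
    X1 = np.column_stack([np.ones(len(X)), X])    # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return beta, 1.0 - ss_res / ss_tot            # coefficients, R^2
```

    Cross-validation, as used in the study, would fit on one subset of print pairs and evaluate R² on held-out pairs.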

  5. SU-E-J-107: Supervised Learning Model of Aligned Collagen for Human Breast Carcinoma Prognosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bredfeldt, J; Liu, Y; Conklin, M

    Purpose: Our goal is to develop and apply a set of optical and computational tools to enable large-scale investigations of the interaction between collagen and tumor cells. Methods: We have built a novel imaging system for automating the capture of whole-slide second harmonic generation (SHG) images of collagen in registry with bright field (BF) images of hematoxylin and eosin stained tissue. To analyze our images, we have integrated a suite of supervised learning tools that semi-automatically model and score collagen interactions with tumor cells via a variety of metrics, a method we call Electronic Tumor Associated Collagen Signatures (eTACS). This group of tools first segments regions of epithelial cells and collagen fibers from BF and SHG images respectively. We then associate fibers with groups of epithelial cells and finally compute features based on the angle of interaction and density of the collagen surrounding the epithelial cell clusters. These features are then processed with a support vector machine to separate cancer patients into high and low risk groups. Results: We validated our model by showing that eTACS produces classifications that have statistically significant correlation with manual classifications. In addition, our system generated classification scores that accurately predicted breast cancer patient survival in a cohort of 196 patients. Feature rank analysis revealed that TACS-positive fibers are better aligned with each other, generally lower in density, and terminate within or near groups of epithelial cells. Conclusion: We are working to apply our model to predict survival in larger cohorts of breast cancer patients with a diversity of breast cancer types, to predict response to treatments such as COX2 inhibitors, and to study collagen architecture changes in other cancer types. In the future, our system may be used to provide metastatic potential information to cancer patients to augment existing clinical assays.

  6. Prediction of treatment outcome in soft tissue sarcoma based on radiologically defined habitats

    NASA Astrophysics Data System (ADS)

    Farhidzadeh, Hamidreza; Chaudhury, Baishali; Zhou, Mu; Goldgof, Dmitry B.; Hall, Lawrence O.; Gatenby, Robert A.; Gillies, Robert J.; Raghavan, Meera

    2015-03-01

    Soft tissue sarcomas are malignant tumors which develop from tissues like fat, muscle, nerves, fibrous tissue or blood vessels. They are challenging to physicians because of their relative infrequency and diverse outcomes, which have hindered development of new therapeutic agents. Additionally, assessing the imaging response of these tumors to therapy is also difficult because of their heterogeneous appearance on magnetic resonance imaging (MRI). In this paper, we assessed standard-of-care MRI sequences performed before and after treatment in 36 patients with soft tissue sarcoma. Tumor tissue was identified by manually drawing a mask on contrast-enhanced images. The Otsu segmentation method was applied to segment tumor tissue into low and high signal intensity regions on both T1 post-contrast and T2 without contrast images. This resulted in four distinctive subregions or "habitats." The features used to predict metastatic tumors and necrosis included the ratio of habitat size to whole tumor size and components of 2D intensity histograms. Individual cases were correctly classified as metastatic or non-metastatic disease with 80.55% accuracy, and as necrosis ≥ 90% or necrosis < 90% with 75.75% accuracy, by using meta-classifiers which contained feature selectors and classifiers.
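    The habitat-splitting step uses Otsu's method, which picks the intensity cut maximising between-class variance. A minimal sketch operating on raw intensity samples rather than a full image:

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's method: choose the cut that maximises between-class variance.

    A minimal sketch of the habitat-splitting step; works on a 1-D array
    of intensity samples instead of an image histogram.
    """
    v = np.sort(np.asarray(values, float))
    best_t, best_var = v[0], -1.0
    for t in np.unique(v)[:-1]:           # candidate cuts between classes
        lo, hi = v[v <= t], v[v > t]
        w0, w1 = len(lo) / len(v), len(hi) / len(v)
        var_between = w0 * w1 * (lo.mean() - hi.mean()) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```

    Applying this split independently on the T1 post-contrast and T2 intensities yields the 2 × 2 = 4 habitats described above.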

  7. 3-D in vitro estimation of temperature using the change in backscattered ultrasonic energy.

    PubMed

    Arthur, R Martin; Basu, Debomita; Guo, Yuzheng; Trobaugh, Jason W; Moros, Eduardo G

    2010-08-01

    Temperature imaging with a non-invasive modality to monitor the heating of tumors during hyperthermia treatment is an attractive alternative to sparse invasive measurement. Previously, we predicted monotonic changes in backscattered energy (CBE) of ultrasound with temperature for certain sub-wavelength scatterers. We also measured CBE values similar to our predictions in bovine liver, turkey breast muscle, and pork rib muscle in 2-D in vitro studies and in nude mice during 2-D in vivo studies. To extend these studies to three dimensions, we compensated for motion and measured CBE in turkey breast muscle. 3-D data sets were assembled from images formed by a phased-array imager with a 7.5-MHz linear probe moved in 0.6-mm steps in elevation during uniform heating from 37 °C to 45 °C in 0.5 °C increments. We used cross-correlation as a similarity measure in RF signals to automatically track feature displacement as a function of temperature. Feature displacement was non-rigid. Envelopes of image regions, compensated for non-rigid motion, were found with the Hilbert transform then smoothed with a 3 × 3 running average filter before forming the backscattered energy at each pixel. CBE in 3-D motion-compensated images was nearly linear with an average sensitivity of 0.30 dB/°C. 3-D estimation of temperature in separate tissue regions had errors with a maximum standard deviation of about 0.5 °C over 1-cm³ volumes. Success of CBE temperature estimation based on 3-D non-rigid tracking and compensation for real and apparent motion of image features could serve as the foundation for the eventual generation of 3-D temperature maps in soft tissue in a non-invasive, convenient, and low-cost way in clinical hyperthermia.
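    Two of the processing steps, envelope detection via the Hilbert transform and the backscattered-energy change in dB, can be sketched as below; the motion compensation and 3 × 3 smoothing from the pipeline are omitted.

```python
import numpy as np

def envelope(x):
    """Signal envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0          # keep positive frequencies, doubled
    if n % 2 == 0:
        h[n // 2] = 1.0        # Nyquist bin for even-length signals
    return np.abs(np.fft.ifft(X * h))

def cbe_db(env_heated, env_ref):
    """Change in backscattered energy (dB) between a heated and a reference
    envelope; a sketch omitting the paper's smoothing and motion steps."""
    return 10.0 * np.log10((env_heated ** 2 + 1e-12) / (env_ref ** 2 + 1e-12))
```

    With the reported ~0.30 dB/°C sensitivity, a measured CBE map converts (approximately linearly) into a temperature-change map.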

  8. There's Waldo! A Normalization Model of Visual Search Predicts Single-Trial Human Fixations in an Object Search Task

    PubMed Central

    Miconi, Thomas; Groomes, Laura; Kreiman, Gabriel

    2016-01-01

    When searching for an object in a scene, how does the brain decide where to look next? Visual search theories suggest the existence of a global “priority map” that integrates bottom-up visual information with top-down, target-specific signals. We propose a mechanistic model of visual search that is consistent with recent neurophysiological evidence, can localize targets in cluttered images, and predicts single-trial behavior in a search task. This model posits that a high-level retinotopic area selective for shape features receives global, target-specific modulation and implements local normalization through divisive inhibition. The normalization step is critical to prevent highly salient bottom-up features from monopolizing attention. The resulting activity pattern constitutes a priority map that tracks the correlation between local input and target features. The maximum of this priority map is selected as the locus of attention. The visual input is then spatially enhanced around the selected location, allowing object-selective visual areas to determine whether the target is present at this location. This model can localize objects both in array images and when objects are pasted in natural scenes. The model can also predict single-trial human fixations, including those in error and target-absent trials, in a search task involving complex objects. PMID:26092221
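    The core computation, a target-modulated response divided by pooled local activity (divisive inhibition), with the map maximum selected as the locus of attention, can be sketched as below. This is a conceptual illustration, not the published implementation.

```python
import numpy as np

def priority_map(features, target, sigma=1.0):
    """Normalized priority map: target-specific drive divided by pooled
    local activity (divisive inhibition).

    `features` is an (H, W, C) feature-channel array and `target` a
    length-C target template; a conceptual sketch of the model.
    """
    drive = features @ target                     # target-specific modulation
    pooled = np.linalg.norm(features, axis=-1)    # local activity pool
    prio = drive / (sigma + pooled)               # divisive normalization
    y, x = np.unravel_index(np.argmax(prio), prio.shape)
    return prio, (y, x)                           # map and fixation locus
```

    Without the divisive denominator, the strongly active but off-target location would dominate; normalization lets the target-matching location win.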

  9. SU-F-J-199: Predictive Models for Cone Beam CT-Based Online Verification of Pencil Beam Scanning Proton Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yin, L; Lin, A; Ahn, P

    Purpose: To utilize online CBCT scans to develop models for predicting DVH metrics in proton therapy of head and neck tumors. Methods: Nine patients with locally advanced oropharyngeal cancer were retrospectively selected for this study. Deformable image registration was applied to map the simulation CT, target volumes, and organs at risk (OARs) contours onto each weekly CBCT scan. Intensity modulated proton therapy (IMPT) treatment plans were created on the simulation CT and forward calculated onto each corrected CBCT scan. Thirty-six potentially predictive metrics were extracted from each corrected CBCT. These features include minimum/maximum/mean over- and under-ranges at the proximal and distal surfaces of PTV volumes, and geometrical and water-equivalent distances between the PTV and each OAR. Principal component analysis (PCA) was used to reduce the dimension of the extracted features. Three principal components were found to account for over 90% of the variance in those features. Datasets from eight patients were used to train a machine learning model to fit these principal components with DVH metrics (dose to 95% and 5% of the PTV, mean dose or max dose to OARs) from the forward calculated dose on each corrected CBCT. The accuracy of this model was verified on the dataset from the 9th patient. Results: The predicted changes of DVH metrics from the model were in good agreement with actual values calculated on corrected CBCT images. Median differences were within 1 Gy for most DVH metrics except for larynx and constrictor mean dose. However, a large spread of the differences was observed, indicating additional training datasets and predictive features are needed to improve the model. Conclusion: Intensity corrected CBCT scans hold the potential to be used for online verification of proton therapy and prediction of delivered dose distributions.
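    The PCA step, keeping enough components to explain 90% of the variance, can be sketched via the SVD of the centred feature matrix:

```python
import numpy as np

def n_components_for_variance(X, threshold=0.90):
    """Number of principal components needed to explain `threshold` of the
    total variance (a sketch of the PCA dimension-reduction step)."""
    Xc = X - X.mean(axis=0)                       # centre each feature
    s = np.linalg.svd(Xc, compute_uv=False)       # singular values
    explained = (s ** 2) / (s ** 2).sum()         # explained-variance ratios
    return int(np.searchsorted(np.cumsum(explained), threshold) + 1)
```

    In the study this count came out to three; the DVH-metric model is then trained on those three component scores instead of all 36 raw features.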

  10. SU-E-J-252: Reproducibility of Radiogenomic Image Features: Comparison of Two Semi-Automated Segmentation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, M; Woo, B; Kim, J

    Purpose: Objective and reliable quantification of imaging phenotype is an essential part of radiogenomic studies. We compared the reproducibility of two semi-automatic segmentation methods for quantitative image phenotyping in magnetic resonance imaging (MRI) of glioblastoma multiforme (GBM). Methods: MRI examinations with T1 post-gadolinium and FLAIR sequences of 10 GBM patients were downloaded from the Cancer Image Archive site. Two semi-automatic segmentation tools with different algorithms (deformable model and grow-cut method) were used to segment contrast enhancement, necrosis and edema regions by two independent observers. A total of 21 imaging features consisting of area and edge groups were extracted automatically from the segmented tumor. The inter-observer variability and coefficient of variation (COV) were calculated to evaluate the reproducibility. Results: Inter-observer correlations and coefficients of variation of imaging features with the deformable model ranged from 0.953 to 0.999 and 2.1% to 9.2%, respectively, and with the grow-cut method from 0.799 to 0.976 and 3.5% to 26.6%, respectively. Coefficients of variation for especially important features, previously reported as predictive of patient survival, were: 3.4% with the deformable model and 7.4% with the grow-cut method for the proportion of contrast-enhanced tumor region; 5.5% with the deformable model and 25.7% with the grow-cut method for the proportion of necrosis; and 2.1% with the deformable model and 4.4% with the grow-cut method for edge sharpness of tumor on CE-T1WI. Conclusion: Comparison of the two semi-automated tumor segmentation techniques shows reliable image feature extraction for radiogenomic analysis of GBM patients with multiparametric brain MRI.
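    The coefficient of variation used for reproducibility is simply the standard deviation as a percentage of the mean; whether sample or population SD was used is not stated in the abstract, so the ddof below is an assumption.

```python
import numpy as np

def coefficient_of_variation(values, ddof=1):
    """Percent coefficient of variation (SD / mean * 100).

    ddof=1 (sample SD) is an assumed convention; the abstract does not
    specify which estimator was used.
    """
    v = np.asarray(values, float)
    return 100.0 * v.std(ddof=ddof) / v.mean()
```

    Lower COV across repeated segmentations of the same tumor indicates a more reproducible feature, which is how the two tools are ranked above.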

  11. Classifying Alzheimer's disease with brain imaging and genetic data using a neural network framework.

    PubMed

    Ning, Kaida; Chen, Bo; Sun, Fengzhu; Hobel, Zachary; Zhao, Lu; Matloff, Will; Toga, Arthur W

    2018-08-01

    A long-standing question is how to best use brain morphometric and genetic data to distinguish Alzheimer's disease (AD) patients from cognitively normal (CN) subjects and to predict those who will progress from mild cognitive impairment (MCI) to AD. Here, we use a neural network (NN) framework on both magnetic resonance imaging-derived quantitative structural brain measures and genetic data to address this question. We tested the effectiveness of NN models in classifying and predicting AD. We further performed a novel analysis of the NN model to gain insight into the most predictive imaging and genetics features and to identify possible interactions between features that affect AD risk. Data were obtained from the AD Neuroimaging Initiative cohort and included baseline structural MRI data and single nucleotide polymorphism (SNP) data for 138 AD patients, 225 CN subjects, and 358 MCI patients. We found that NN models with both brain and SNP features as predictors perform significantly better than models with either alone in classifying AD and CN subjects, with an area under the receiver operating characteristic curve (AUC) of 0.992, and in predicting the progression from MCI to AD (AUC=0.835). The most important predictors in the NN model were the left middle temporal gyrus volume, the left hippocampus volume, the right entorhinal cortex volume, and the APOE (a gene that encodes apolipoprotein E) ɛ4 risk allele. Furthermore, we identified interactions between the right parahippocampal gyrus and the right lateral occipital gyrus, the right banks of the superior temporal sulcus and the left posterior cingulate, and SNP rs10838725 and the left lateral occipital gyrus. Our work shows the ability of NN models to not only classify and predict AD occurrence but also to identify important AD risk factors and interactions among them. Copyright © 2018 Elsevier Inc. All rights reserved.
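The AUC values reported above can be computed from raw classifier scores without an explicit ROC sweep, via the rank-sum (Mann-Whitney U) identity. A small sketch with hypothetical scores, not the study's data:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity (no ties assumed)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical model scores for AD (1) vs cognitively normal (0) subjects.
y_true  = [1, 1, 1, 0, 0, 0, 0]
y_score = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2, 0.1]
estimated = auc(y_true, y_score)
```

The same computation applies whether the scores come from a brain-only, SNP-only, or combined model, which is how the abstract's model comparison is made.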

  12. The Massachusetts abscess rule: a clinical decision rule using ultrasound to identify methicillin-resistant Staphylococcus aureus in skin abscesses.

    PubMed

    Gaspari, Romolo J; Blehar, David; Polan, David; Montoya, Anthony; Alsulaibikh, Amal; Liteplo, Andrew

    2014-05-01

    Treatment failure rates for incision and drainage (I&D) of skin abscesses have increased in recent years and may be attributable to an increased prevalence of community-acquired methicillin-resistant Staphylococcus aureus (CA-MRSA). Previous authors have described sonographic features of abscesses, such as the presence of interstitial fluid, characteristics of abscess debris, and depth of abscess cavity. It is possible that the sonographic features are associated with MRSA and can be used to predict the presence of MRSA. The authors describe a potential clinical decision rule (CDR) using sonographic images to predict the presence of CA-MRSA. This was a pilot CDR derivation study using databases from two emergency departments (EDs) of patients presenting to the ED with uncomplicated skin abscesses who underwent I&D and culture of the abscess contents. Patients underwent ultrasound (US) imaging of the abscesses prior to I&D. Abscess contents were sent for culture and sensitivity. Two independent physicians experienced in soft tissue US blinded to the culture results and clinical data reviewed the images in a standardized fashion for the presence or absence of the predetermined image characteristics. In the instance of a disagreement between the initial two investigators, a third reviewer adjudicated the findings prior to analysis. The association between the primary outcome (presence of MRSA) and each sonographic feature was assessed using univariate and multivariate analysis. The reliability of each sonographic feature was measured by calculating the kappa (κ) coefficient of interobserver agreement. The decision tree model for the CDR was created with recursive partitioning using variables that were both reliable and strongly associated with MRSA. Of the total of 2,167 patients who presented with skin and soft tissue infections during the study period, 605 patients met inclusion criteria with US imaging and culture and sensitivity of purulence. 
Among the pathogenic organisms, MRSA was the most frequently isolated, representing 50.1% of all patients. Six of the sonographic features were associated with the presence of MRSA, but only four of these features were reliable using the kappa analysis. Recursive partitioning identified three independent variables that were both associated with MRSA and reliable: 1) the lack of a well-defined edge, 2) small volume, and 3) irregular or indistinct shape. This decision rule demonstrates a sensitivity of 89.2% (95% confidence interval [CI] = 84.7% to 92.7%), a specificity of 44.7% (95% CI = 40.9% to 47.8%), a positive predictive value of 57.9% (95% CI = 55.0% to 60.2%), a negative predictive value of 82.9% (95% CI = 75.9% to 88.5%), and an odds ratio (OR) of 7.0 (95% CI = 4.0 to 12.2). According to our putative CDR, patients with skin abscesses that are small, irregularly shaped or indistinct, and lacking a well-defined edge are seven times more likely to demonstrate MRSA on culture. © 2014 by the Society for Academic Emergency Medicine.
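Every summary statistic quoted above derives from a single 2x2 table of rule outcome versus culture result. A sketch with illustrative counts chosen only to land near the reported values:

```python
# Hypothetical 2x2 counts for a decision rule vs. culture result:
#                 MRSA+   MRSA-
# rule positive    tp      fp
# rule negative    fn      tn
tp, fp, fn, tn = 270, 167, 33, 135   # illustrative counts only

sensitivity = tp / (tp + fn)          # P(rule+ | MRSA+)
specificity = tn / (tn + fp)          # P(rule- | MRSA-)
ppv = tp / (tp + fp)                  # P(MRSA+ | rule+)
npv = tn / (tn + fn)                  # P(MRSA- | rule-)
odds_ratio = (tp * tn) / (fp * fn)    # cross-product (diagnostic) odds ratio
```

The odds ratio of about 7 is what the abstract summarizes as "seven times more likely to demonstrate MRSA on culture."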

  13. Breast cancer molecular subtype classification using deep features: preliminary results

    NASA Astrophysics Data System (ADS)

    Zhu, Zhe; Albadawy, Ehab; Saha, Ashirbani; Zhang, Jun; Harowicz, Michael R.; Mazurowski, Maciej A.

    2018-02-01

    Radiogenomics is a field of investigation that attempts to examine the relationship between imaging characteristics of cancerous lesions and their genomic composition. This could offer a noninvasive alternative to establishing genomic characteristics of tumors and aid cancer treatment planning. While deep learning has shown its superiority in many detection and classification tasks, breast cancer radiogenomic data suffer from a very limited number of training examples, which renders training a neural network for this problem directly and without pretraining very difficult. In this study, we investigated an alternative deep learning approach, referred to as the deep features or off-the-shelf network approach, to classify breast cancer molecular subtypes using breast dynamic contrast-enhanced MRIs. We used the feature maps of different convolutional layers and fully connected layers as features and trained support vector machines on these features for prediction. For feature maps with multiple channels, max-pooling was performed along each channel. We focused on distinguishing the Luminal A subtype from other subtypes. To evaluate the models, 10-fold cross-validation was performed and the final AUC was obtained by averaging the performance over all folds. The highest average AUC obtained was 0.64 (95% CI: 0.57-0.71), using the feature maps of the last fully connected layer. This indicates the promise of using this approach to predict breast cancer molecular subtypes. Since the best performance appears in the last fully connected layer, it also implies that breast cancer molecular subtypes may relate to high-level image features.
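The channel-wise max-pooling step described above, which collapses a convolutional feature map into a fixed-length descriptor for an SVM, can be sketched as follows with a hypothetical feature-map shape:

```python
import numpy as np

def deep_feature_vector(feature_map):
    """Collapse a (C, H, W) convolutional feature map into a C-dimensional
    descriptor by max-pooling over the spatial extent of each channel,
    in the spirit of the off-the-shelf 'deep features' approach."""
    return feature_map.max(axis=(1, 2))

rng = np.random.default_rng(0)
fmap = rng.normal(size=(64, 14, 14))   # hypothetical layer output: 64 channels
feat = deep_feature_vector(fmap)       # 64-dimensional input for an SVM
```

Fully connected layers already produce a flat vector, so no pooling is needed there.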

  14. Can CT imaging features of ground-glass opacity predict invasiveness? A meta-analysis.

    PubMed

    Dai, Jian; Yu, Guoyou; Yu, Jianqiang

    2018-04-01

    A meta-analysis was conducted to investigate the diagnostic performance of computed tomography (CT) imaging features of ground-glass opacity (GGO) for predicting invasiveness. Two reviewers independently searched PubMed, Medline, Web of Science, Cochrane, Embase, and CNKI for relevant studies. CT imaging signs of bubble lucency, spiculation, lobulated margin, and pleural indentation were used as diagnostic references to discriminate pre-invasive and invasive disease. The sensitivity, specificity, diagnostic odds ratio (DOR), summary receiver operating characteristic (SROC) curves, and the area under the SROC curve (AUC) were calculated to evaluate diagnostic efficiency. Twelve studies were finally included. Diagnostic performance ranged from 0.41 to 0.52 for sensitivity and 0.56 to 0.63 for specificity. The diagnostic positive and negative likelihood ratios ranged from 1.03 to 2.13 and 0.52 to 1.05, respectively. The DORs of the GGO CT features for discriminating invasive disease ranged from 1.02 to 4.00. The area under the SROC curve was also low, ranging from 0.60 to 0.67 for discriminating pre-invasive from invasive disease. The diagnostic value of a single CT imaging sign of GGO, such as bubble lucency, spiculation, lobulated margin, or pleural indentation, is limited for discriminating pre-invasive and invasive disease because of low sensitivity, specificity, and AUC. © 2018 The Authors. Thoracic Cancer published by China Lung Oncology Group and John Wiley & Sons Australia, Ltd.
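The likelihood ratios and diagnostic odds ratio above are simple functions of sensitivity and specificity. A sketch using illustrative values from within the reported ranges:

```python
# Likelihood ratios and diagnostic odds ratio from a sign's sensitivity
# and specificity (values here are illustrative, within the ranges reported).
sens, spec = 0.50, 0.60

lr_pos = sens / (1 - spec)             # positive likelihood ratio
lr_neg = (1 - sens) / spec             # negative likelihood ratio
dor = lr_pos / lr_neg                  # diagnostic odds ratio
```

A DOR of 1 means the sign carries no diagnostic information; values below about 4, as reported here, are conventionally considered weak.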

  15. The value of specific MRI features in the evaluation of suspected placental invasion.

    PubMed

    Lax, Allison; Prince, Martin R; Mennitt, Kevin W; Schwebach, J Reid; Budorick, Nancy E

    2007-01-01

    The objective of this study was to determine imaging features that may help predict the presence of placenta accreta, placenta increta or placenta percreta on prenatal MRI scanning. A retrospective review of the prenatal MR scans of 10 patients with a diagnosis of placenta accreta, placenta increta or placenta percreta made by pathologic and clinical reports and of 10 patients without placental invasion was performed. Two expert MRI readers were blinded to the patients' true diagnosis and were asked to score a total of 17 MRI features of the placenta and adjacent structures. The interrater reliability was assessed using kappa statistics. The features with a moderate kappa statistic or better (kappa > .40) were then compared with the true diagnosis for each observer. Seven of the scored features had an interobserver reliability of kappa > .40: placenta previa (kappa = .83); abnormal uterine bulging (kappa = .48); intraplacental hemorrhage (kappa = .51); heterogeneity of signal intensity on T2-weighted (T2W) imaging (kappa = .61); the presence of dark intraplacental bands on T2W imaging (kappa = .53); increased placental thickness (kappa = .69); and visualization of the myometrium beneath the placenta on T2W imaging (kappa = .44). Using Fisher's two-sided exact test, there was a statistically significant difference between the proportion of patients with placental invasion and those without placental invasion for three of the features: abnormal uterine bulging (Rater 1, P = .005; Rater 2, P = .011); heterogeneity of T2W imaging signal intensity (Rater 1, P = .006; Rater 2, P = .010); and presence of dark intraplacental bands on T2W imaging (Rater 1, P = .003; Rater 2, P = .033). MRI can be a useful adjunct to ultrasound in diagnosing placenta accreta prenatally. 
Three features that are seen on MRI in patients with placental invasion appear to be useful for diagnosis: uterine bulging; heterogeneous signal intensity within the placenta; and the presence of dark intraplacental bands on T2W imaging.
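The interrater agreement statistic used in this study, Cohen's kappa, corrects observed agreement for the agreement expected by chance. A minimal sketch with hypothetical binary ratings:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary scores (1 = feature present)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement
    pe = (np.mean(a) * np.mean(b)               # chance agreement
          + np.mean(1 - a) * np.mean(1 - b))
    return (po - pe) / (1 - pe)

# Hypothetical presence/absence scores from two readers over 10 scans.
rater1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
rater2 = [1, 1, 0, 0, 1, 0, 0, 1, 0, 1]
kappa = cohens_kappa(rater1, rater2)
```

Under the study's threshold, a feature with kappa > 0.40 (as here) would be retained for comparison against the true diagnosis.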

  16. Fixed versus mixed RSA: Explaining visual representations by fixed and mixed feature sets from shallow and deep computational models.

    PubMed

    Khaligh-Razavi, Seyed-Mahdi; Henriksson, Linda; Kay, Kendrick; Kriegeskorte, Nikolaus

    2017-02-01

    Studies of the primate visual system have begun to test a wide range of complex computational object-vision models. Realistic models have many parameters, which in practice cannot be fitted using the limited amounts of brain-activity data typically available. Task performance optimization (e.g. using backpropagation to train neural networks) provides major constraints for fitting parameters and discovering nonlinear representational features appropriate for the task (e.g. object classification). Model representations can be compared to brain representations in terms of the representational dissimilarities they predict for an image set. This method, called representational similarity analysis (RSA), enables us to test the representational feature space as is (fixed RSA) or to fit a linear transformation that mixes the nonlinear model features so as to best explain a cortical area's representational space (mixed RSA). Like voxel/population-receptive-field modelling, mixed RSA uses a training set (different stimuli) to fit one weight per model feature and response channel (voxels here), so as to best predict the response profile across images for each response channel. We analysed response patterns elicited by natural images, which were measured with functional magnetic resonance imaging (fMRI). We found that early visual areas were best accounted for by shallow models, such as a Gabor wavelet pyramid (GWP). The GWP model performed similarly with and without mixing, suggesting that the original features already approximated the representational space, obviating the need for mixing. However, a higher ventral-stream visual representation (lateral occipital region) was best explained by the higher layers of a deep convolutional network and mixing of its feature set was essential for this model to explain the representation. 
We suspect that mixing was essential because the convolutional network had been trained to discriminate a set of 1000 categories, whose frequencies in the training set did not match their frequencies in natural experience or their behavioural importance. The latter factors might determine the representational prominence of semantic dimensions in higher-level ventral-stream areas. Our results demonstrate the benefits of testing both the specific representational hypothesis expressed by a model's original feature space and the hypothesis space generated by linear transformations of that feature space.
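The core RSA object, a representational dissimilarity matrix (RDM), and the fixed-RSA comparison of a model RDM against a brain RDM can be sketched as follows. The response patterns are synthetic, and comparing upper triangles by Pearson correlation is a simplification; the RSA literature often uses rank correlations instead.

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for each pair of images."""
    return 1.0 - np.corrcoef(responses)

rng = np.random.default_rng(0)
brain = rng.normal(size=(12, 100))                 # 12 images x 100 voxels (synthetic)
model = brain + 0.5 * rng.normal(size=(12, 100))   # noisy stand-in "model features"

iu = np.triu_indices(12, k=1)                      # compare off-diagonal entries only
fit = np.corrcoef(rdm(brain)[iu], rdm(model)[iu])[0, 1]
```

Mixed RSA would first fit one weight per model feature and voxel on a training set of stimuli, then build the model RDM from the reweighted features.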

  17. A Dynamic Graph Cuts Method with Integrated Multiple Feature Maps for Segmenting Kidneys in 2D Ultrasound Images.

    PubMed

    Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong

    2018-02-12

    Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. 
This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
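The Dice index reported above measures overlap between an automatic segmentation and a reference one. A minimal sketch on two tiny hypothetical masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity index between two binary segmentation masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two small hypothetical kidney masks (1 = kidney pixel).
auto   = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 1, 1, 0]])
manual = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 1],
                   [0, 0, 1, 0]])
score = dice(auto, manual)
```

A Dice index of 1 indicates perfect overlap; the paper's average of 0.9446 indicates near-complete agreement with the reference kidney contours.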

  18. An investigation to improve the Menhaden fishery prediction and detection model through the application of ERTS-A data

    NASA Technical Reports Server (NTRS)

    Maughan, P. M. (Principal Investigator)

    1972-01-01

    The author has identified the following significant results. Preliminary analyses indicate that several important relationships have been observed utilizing ERTS-1 imagery. Of most significance is that in the Mississippi Sound, as elsewhere, considerable detail exists as to turbidity patterns in the water column. Simple analysis is complicated by the apparent interaction between actual turbidity, turbidity induced by shoal water, and actual imaging of the bottom in extreme shoal water. A statistical approach is being explored which shows promise of at least partially separating these effects so that partitioning of true turbid plumes can be accomplished. This partitioning is of great importance to this program in that supportive data seem to indicate that menhaden occur more frequently in turbid areas. In this connection four individual captures have been associated with a major turbid feature imaged on 6 August. If a significant relationship between imaged turbid features and catch distribution can be established, for example by graphic and/or numeric analysis, it will represent a major advancement for short term prediction of commercially accessible menhaden.

  19. Rapid Target Detection in High Resolution Remote Sensing Images Using Yolo Model

    NASA Astrophysics Data System (ADS)

    Wu, Z.; Chen, X.; Gao, Y.; Li, Y.

    2018-04-01

    Object detection in high resolution remote sensing images is a fundamental and challenging problem in the field of remote sensing imagery analysis for civil and military applications, because complex neighboring environments can cause recognition algorithms to mistake irrelevant ground objects for target objects. The Deep Convolutional Neural Network (DCNN) is a focus of current object detection research for its powerful feature extraction ability and has achieved state-of-the-art results in computer vision. A common object detection pipeline based on DCNNs consists of region proposal, CNN feature extraction, region classification, and post-processing. The YOLO model instead frames object detection as a regression problem, using a single CNN to predict bounding boxes and class probabilities in an end-to-end way, which makes prediction faster. In this paper, a YOLO-based model is used for object detection in high resolution remote sensing images. Experiments on the NWPU VHR-10 dataset and our airport/airplane dataset obtained from Google Earth show that, compared with the common pipeline, the proposed model speeds up the detection process while maintaining good accuracy.
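Detectors such as YOLO are scored (and their duplicate boxes suppressed) using the intersection-over-union between predicted and ground-truth boxes. A minimal sketch with hypothetical pixel coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted airplane box vs. a ground-truth box (hypothetical coordinates).
overlap = iou((10, 10, 50, 50), (30, 30, 70, 70))
```

A detection is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.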

  20. Assessing product image quality for online shopping

    NASA Astrophysics Data System (ADS)

    Goswami, Anjan; Chung, Sung H.; Chittar, Naren; Islam, Atiq

    2012-01-01

    Assessing product-image quality is important in the context of online shopping. A high quality image that conveys more information about a product can boost the buyer's confidence and can get more attention. However, the notion of image quality for product-images is not the same as that in other domains. The perception of quality of product-images depends not only on various photographic quality features but also on various high level features such as clarity of the foreground and goodness of the background. In this paper, we define a notion of product-image quality based on various such features. We conduct a crowd-sourced experiment to collect user judgments on thousands of eBay's images. We formulate a multi-class classification problem for modeling image quality by classifying images into good, fair, and poor quality based on the guided perceptual notions from the judges. We also conduct experiments with regression using average crowd-sourced human judgments as the target. We compute a pseudo-regression score as the expected average of predicted classes and also compute a score from the regression technique. We design many experiments with various sampling and voting schemes with crowd-sourced data and construct various experimental image quality models. Most of our models have reasonable accuracies (greater than or equal to 70%) on the test data set. We observe that our computed image quality score has a high (0.66) rank correlation with average votes from the crowd-sourced human judgments.
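The pseudo-regression score mentioned above, the expected value of the predicted class under the classifier's probabilities, can be sketched as follows with hypothetical class probabilities:

```python
import numpy as np

# Class probabilities for poor (0), fair (1), good (2) from a hypothetical
# 3-class quality classifier, one row per image.
proba = np.array([[0.1, 0.3, 0.6],
                  [0.7, 0.2, 0.1]])

class_values = np.array([0.0, 1.0, 2.0])
pseudo_score = proba @ class_values    # expected class value per image
```

Unlike a hard class label, this score varies continuously, which makes rank correlation against averaged human votes meaningful.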

  1. Computer-aided diagnostic system for detection of Hashimoto thyroiditis on ultrasound images from a Polish population.

    PubMed

    Acharya, U Rajendra; Sree, S Vinitha; Krishnan, M Muthu Rama; Molinari, Filippo; Zieleźnik, Witold; Bardales, Ricardo H; Witkowska, Agnieszka; Suri, Jasjit S

    2014-02-01

    Computer-aided diagnostic (CAD) techniques aid physicians in better diagnosis of diseases by extracting objective and accurate diagnostic information from medical data. Hashimoto thyroiditis is the most common type of inflammation of the thyroid gland. The inflammation changes the structure of the thyroid tissue, and these changes are reflected as echogenic changes on ultrasound images. In this work, we propose a novel CAD system (a class of systems called ThyroScan) that extracts textural features from a thyroid sonogram and uses them to aid in the detection of Hashimoto thyroiditis. In this paradigm, we extracted grayscale features based on stationary wavelet transform from 232 normal and 294 Hashimoto thyroiditis-affected thyroid ultrasound images obtained from a Polish population. Significant features were selected using a Student t test. The resulting feature vectors were used to build and evaluate the following 4 classifiers using a 10-fold stratified cross-validation technique: support vector machine, decision tree, fuzzy classifier, and K-nearest neighbor. Using 7 significant features that characterized the textural changes in the images, the fuzzy classifier had the highest classification accuracy of 84.6%, sensitivity of 82.8%, specificity of 87.0%, and a positive predictive value of 88.9%. The proposed ThyroScan CAD system uses novel features to noninvasively detect the presence of Hashimoto thyroiditis on ultrasound images. Compared to manual interpretations of ultrasound images, the CAD system offers a more objective interpretation of the nature of the thyroid. The preliminary results presented in this work indicate the possibility of using such a CAD system in a clinical setting after evaluating it with larger databases in multicenter clinical trials.
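The feature-selection step above, keeping features whose class means differ significantly under a Student t test, can be sketched as follows. The data are synthetic, with one feature made deliberately discriminative:

```python
import numpy as np

def t_statistic(x, y):
    """Two-sample Student t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    sp2 = (((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
           / (nx + ny - 2))
    return (x.mean() - y.mean()) / np.sqrt(sp2 * (1 / nx + 1 / ny))

rng = np.random.default_rng(0)
normal_feats = rng.normal(size=(50, 7))   # hypothetical texture features, normal
ht_feats = rng.normal(size=(50, 7))       # hypothetical features, Hashimoto
ht_feats[:, 2] += 2.0                     # make feature 2 discriminative

t = np.array([t_statistic(ht_feats[:, j], normal_feats[:, j]) for j in range(7)])
best = int(np.argmax(np.abs(t)))          # index of the most separating feature
```

In practice the |t| values are converted to p-values and features below a significance threshold are passed on to the classifiers.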

  2. Non-destructive Determination of Shikimic Acid Concentration in Transgenic Maize Exhibiting Glyphosate Tolerance Using Chlorophyll Fluorescence and Hyperspectral Imaging

    PubMed Central

    Feng, Xuping; Yu, Chenliang; Chen, Yue; Peng, Jiyun; Ye, Lanhan; Shen, Tingting; Wen, Haiyong; He, Yong

    2018-01-01

    The development of transgenic glyphosate-tolerant crops has revolutionized weed control in crops in many regions of the world. The early, non-destructive identification of superior plant phenotypes is an important stage in plant breeding programs. Here, glyphosate-tolerant transgenic maize and its parental wild-type control were studied at 2, 4, 6, and 8 days after glyphosate treatment. Visible and near-infrared hyperspectral imaging and chlorophyll fluorescence imaging techniques were applied to monitor the performance of plants. In our research, transgenic maize, which was highly tolerant to glyphosate, was phenotyped using these high-throughput non-destructive methods to validate low levels of shikimic acid accumulation and high photochemical efficiency of photosystem II as reflected by maximum quantum yield and non-photochemical quenching in response to glyphosate. For hyperspectral imaging analysis, the combination of spectroscopy and chemometric methods was used to predict shikimic acid concentration. Our results indicated that a partial least-squares regression model, built on optimal wavelengths, effectively predicted shikimic acid concentrations, with a coefficient of determination value of 0.79 for the calibration set, and 0.82 for the prediction set. Moreover, shikimic acid concentration estimates from hyperspectral images were visualized on the prediction maps by spectral features, which could help in developing a simple multispectral imaging instrument for non-destructive phenotyping. Specific physiological effects of glyphosate affected the photochemical processes of maize, which induced substantial changes in chlorophyll fluorescence characteristics. A new data-driven method, combining mean fluorescence parameters with a feature screening approach, provided a satisfactory relationship between fluorescence parameters and shikimic acid content. 
The glyphosate-tolerant transgenic plants can be identified with the developed discrimination model established on important wavelengths or sensitive fluorescence parameters 6 days after glyphosate treatment. The overall results indicated that both hyperspectral imaging and chlorophyll fluorescence imaging techniques could provide useful tools for stress phenotyping in maize breeding programs and could enable the detection and evaluation of superior genotypes, such as glyphosate tolerance, with a non-destructive high-throughput technique. PMID:29686693
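A rough sketch of the wavelength-based regression described above. For simplicity this uses ordinary least squares on a few hypothetical "optimal" wavelengths rather than a full partial least-squares fit, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reflectance spectra (60 plants x 200 wavelengths) with
# shikimic-acid concentration driven by three "optimal" wavelengths.
spectra = rng.normal(size=(60, 200))
optimal = [25, 90, 150]                     # hypothetical selected wavelengths
conc = (spectra[:, optimal] @ np.array([0.8, -0.5, 0.3])
        + rng.normal(0, 0.2, 60))           # synthetic concentrations + noise

# Simplified stand-in for PLSR: least squares on the selected wavelengths.
A = np.hstack([spectra[:, optimal], np.ones((60, 1))])
coef, *_ = np.linalg.lstsq(A, conc, rcond=None)
pred = A @ coef

# Coefficient of determination (R^2), the metric quoted in the abstract.
r2 = 1 - np.sum((conc - pred) ** 2) / np.sum((conc - np.mean(conc)) ** 2)
```

True PLSR would instead project the full spectrum onto latent components chosen to maximize covariance with the concentration.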

  3. Non-destructive Determination of Shikimic Acid Concentration in Transgenic Maize Exhibiting Glyphosate Tolerance Using Chlorophyll Fluorescence and Hyperspectral Imaging.

    PubMed

    Feng, Xuping; Yu, Chenliang; Chen, Yue; Peng, Jiyun; Ye, Lanhan; Shen, Tingting; Wen, Haiyong; He, Yong

    2018-01-01

    The development of transgenic glyphosate-tolerant crops has revolutionized weed control in crops in many regions of the world. The early, non-destructive identification of superior plant phenotypes is an important stage in plant breeding programs. Here, glyphosate-tolerant transgenic maize and its parental wild-type control were studied at 2, 4, 6, and 8 days after glyphosate treatment. Visible and near-infrared hyperspectral imaging and chlorophyll fluorescence imaging techniques were applied to monitor the performance of plants. In our research, transgenic maize, which was highly tolerant to glyphosate, was phenotyped using these high-throughput non-destructive methods to validate low levels of shikimic acid accumulation and high photochemical efficiency of photosystem II as reflected by maximum quantum yield and non-photochemical quenching in response to glyphosate. For hyperspectral imaging analysis, the combination of spectroscopy and chemometric methods was used to predict shikimic acid concentration. Our results indicated that a partial least-squares regression model, built on optimal wavelengths, effectively predicted shikimic acid concentrations, with a coefficient of determination value of 0.79 for the calibration set, and 0.82 for the prediction set. Moreover, shikimic acid concentration estimates from hyperspectral images were visualized on the prediction maps by spectral features, which could help in developing a simple multispectral imaging instrument for non-destructive phenotyping. Specific physiological effects of glyphosate affected the photochemical processes of maize, which induced substantial changes in chlorophyll fluorescence characteristics. A new data-driven method, combining mean fluorescence parameters with a feature screening approach, provided a satisfactory relationship between fluorescence parameters and shikimic acid content. 
The glyphosate-tolerant transgenic plants can be identified with the developed discrimination model established on important wavelengths or sensitive fluorescence parameters 6 days after glyphosate treatment. The overall results indicated that both hyperspectral imaging and chlorophyll fluorescence imaging techniques could provide useful tools for stress phenotyping in maize breeding programs and could enable the detection and evaluation of superior genotypes, such as glyphosate tolerance, with a non-destructive high-throughput technique.

  4. Semi-automatic segmentation of nonviable cardiac tissue using cine and delayed enhancement magnetic resonance images

    NASA Astrophysics Data System (ADS)

    O'Donnell, Thomas P.; Xu, Ning; Setser, Randolph M.; White, Richard D.

    2003-05-01

    Post myocardial infarction, the identification and assessment of non-viable (necrotic) tissues is necessary for effective development of intervention strategies and treatment plans. Delayed Enhancement Magnetic Resonance (DEMR) imaging is a technique whereby non-viable cardiac tissue appears with increased signal intensity. Radiologists typically acquire these images in conjunction with other functional modalities (e.g., MR Cine), and use domain knowledge and experience to isolate the non-viable tissues. In this paper, we present a technique for automatically segmenting these tissues given the delineation of myocardial borders in the DEMR and in the End-systolic and End-diastolic MR Cine images. Briefly, we obtain a set of segmentations furnished by an expert and employ an artificial intelligence technique, Support Vector Machines (SVMs), to "learn" the segmentations based on features culled from the images. Using those features we then allow the SVM to predict the segmentations the expert would provide on previously unseen images.
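A minimal sketch of the learning setup described above: an SVM trained on per-pixel features against expert labels, then used to predict segmentations for new pixels. The features, labels, and RBF-kernel choice are illustrative assumptions, not the authors' exact configuration (requires scikit-learn):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-pixel features (e.g. DEMR intensity and distance to the
# myocardial border) with expert labels: 1 = non-viable, 0 = viable tissue.
features = rng.normal(size=(200, 2))
labels = (features[:, 0] > 0.5).astype(int)   # synthetic expert "segmentation"

svm = SVC(kernel="rbf").fit(features, labels)
predicted = svm.predict(features[:5])         # segmentation for unseen pixels
```

Stacking the per-pixel predictions back into image coordinates yields the predicted non-viable-tissue mask.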

  5. Magnetic resonance imaging of injuries to the ankle joint: can it predict clinical outcome?

    PubMed

    Zanetti, M; De Simoni, C; Wetz, H H; Zollinger, H; Hodler, J

    1997-02-01

    To predict clinical outcome after ankle sprains on the basis of magnetic resonance (MR) findings. Twenty-nine consecutive patients (mean age 32.9 years, range 13-60 years) were examined clinically and with MR imaging both after trauma and following standardized conservative therapy. Various MR abnormalities were related to a clinical outcome score. There was a tendency for a better clinical outcome in partial, rather than complete, tears of the anterior talofibular ligament and when there was no fluid within the peroneal tendon sheath at the initial MR examination (P = 0.092 for either abnormality). A number of other MR features did not significantly influence clinical outcome, including the presence of a calcaneofibular ligament lesion and a bone bruise of the talar dome. Clinical outcome after ankle sprain cannot consistently be predicted by MR imaging, although MR imaging may be more accurate when the anterior talofibular ligament is only partially torn and there are no signs of injury to the peroneal tendon sheath.

  6. Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images.

    PubMed

    Kang, Jiayin; Gao, Yaozong; Shi, Feng; Lalush, David S; Lin, Weili; Shen, Dinggang

    2015-09-01

    Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in human body. PET has been widely used in various clinical applications, such as in diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be used and injected into a living body. As a result, it will inevitably increase the patient's exposure to radiation. One solution to solve this problem is predicting standard-dose PET images using low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression forest based framework for predicting a standard-dose brain [(18)F]FDG PET image by using a low-dose brain [(18)F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. The authors employ a regression forest for predicting the standard-dose brain [(18)F]FDG PET image by low-dose brain [(18)F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [(18)F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validations. 
The proposed algorithm gives promising results, producing well-estimated standard-dose brain [(18)F]FDG PET images with substantially enhanced image quality relative to the low-dose brain [(18)F]FDG PET images. In this paper, the authors propose a framework to generate standard-dose brain [(18)F]FDG PET images using low-dose brain [(18)F]FDG PET and MRI images. Both the visual and quantitative results indicate that standard-dose brain [(18)F]FDG PET can be well predicted using MRI and low-dose brain [(18)F]FDG PET.
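
The tissue-specific modelling step lends itself to a compact sketch. Below, per-tissue linear least-squares models stand in for the paper's regression forest, and all images are synthetic numpy arrays (the tissue labels, gain values, and array shapes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for co-registered images: tissue labels 0/1/2
# (CSF, GM, WM), a low-dose PET slice, and the "true" standard-dose
# PET we want to predict. All values are synthetic.
tissue = rng.integers(0, 3, size=(32, 32))
low_dose = rng.random((32, 32))
gain = np.array([1.5, 2.0, 2.5])          # hypothetical per-tissue scaling
standard_dose = gain[tissue] * low_dose + 0.1

# Fit one linear model per tissue class, mimicking the paper's
# tissue-specific modelling (least squares standing in for the forest).
models = {}
for t in range(3):
    m = tissue == t
    A = np.stack([low_dose[m], np.ones(m.sum())], axis=1)
    coef, *_ = np.linalg.lstsq(A, standard_dose[m], rcond=None)
    models[t] = coef

# Predict the standard-dose image and measure the error.
pred = np.empty_like(standard_dose)
for t, (a, b) in models.items():
    m = tissue == t
    pred[m] = a * low_dose[m] + b

rmse = float(np.sqrt(np.mean((pred - standard_dose) ** 2)))
```

Because the synthetic data is exactly linear per tissue class, the fitted models recover it to machine precision; the paper's iterative refinement step would then operate on the residual `pred - standard_dose`.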

  7. Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images

    PubMed Central

    Kang, Jiayin; Gao, Yaozong; Shi, Feng; Lalush, David S.; Lin, Weili; Shen, Dinggang

    2015-01-01

    Purpose: Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in the human body. PET has been widely used in various clinical applications, such as the diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be injected into the living body. This inevitably increases the patient's exposure to radiation. One solution is to predict standard-dose PET images from low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression-forest-based framework for predicting a standard-dose brain [18F]FDG PET image by using a low-dose brain [18F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. Methods: The authors employ a regression forest for predicting the standard-dose brain [18F]FDG PET image from low-dose brain [18F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [18F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. Results: The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validation. 
The proposed algorithm gives promising results, producing well-estimated standard-dose brain [18F]FDG PET images with substantially enhanced image quality relative to the low-dose brain [18F]FDG PET images. Conclusions: In this paper, the authors propose a framework to generate standard-dose brain [18F]FDG PET images using low-dose brain [18F]FDG PET and MRI images. Both the visual and quantitative results indicate that standard-dose brain [18F]FDG PET can be well predicted using MRI and low-dose brain [18F]FDG PET. PMID:26328979

  8. Prediction of standard-dose brain PET image by using MRI and low-dose brain [{sup 18}F]FDG PET images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Jiayin; Gao, Yaozong; Shi, Feng

    Purpose: Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in the human body. PET has been widely used in various clinical applications, such as the diagnosis of brain disorders. High-quality PET images play an essential role in diagnosing brain diseases/disorders. In practice, in order to obtain high-quality PET images, a standard-dose radionuclide (tracer) needs to be injected into the living body. This inevitably increases the patient's exposure to radiation. One solution is to predict standard-dose PET images from low-dose PET images. As yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression-forest-based framework for predicting a standard-dose brain [{sup 18}F]FDG PET image by using a low-dose brain [{sup 18}F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. Methods: The authors employ a regression forest for predicting the standard-dose brain [{sup 18}F]FDG PET image from low-dose brain [{sup 18}F]FDG PET and MRI images. Specifically, the proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both low-dose PET and MRI images to build tissue-specific models that can be used to initially predict standard-dose brain [{sup 18}F]FDG PET images. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. Results: The authors evaluated their algorithm on a brain dataset, consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validation. 
The proposed algorithm gives promising results, producing well-estimated standard-dose brain [{sup 18}F]FDG PET images with substantially enhanced image quality relative to the low-dose brain [{sup 18}F]FDG PET images. Conclusions: In this paper, the authors propose a framework to generate standard-dose brain [{sup 18}F]FDG PET images using low-dose brain [{sup 18}F]FDG PET and MRI images. Both the visual and quantitative results indicate that standard-dose brain [{sup 18}F]FDG PET can be well predicted using MRI and low-dose brain [{sup 18}F]FDG PET.

  9. Comparison of conventional and automated breast volume ultrasound in the description and characterization of solid breast masses based on BI-RADS features.

    PubMed

    Kim, Hyunji; Cha, Joo Hee; Oh, Ha-Yeun; Kim, Hak Hee; Shin, Hee Jung; Chae, Eun Young

    2014-07-01

    To compare the performance of radiologists in the use of conventional ultrasound (US) and automated breast volume ultrasound (ABVU) for the characterization of benign and malignant solid breast masses based on Breast Imaging Reporting and Data System (BI-RADS) criteria. Conventional US and ABVU images were obtained in 87 patients with 106 solid breast masses (52 cancers, 54 benign lesions). Three experienced radiologists who were blinded to all examination results independently characterized the lesions and reported a BI-RADS assessment category and a level of suspicion of malignancy. The results were analyzed by calculation of Cohen's κ coefficient and by receiver operating characteristic (ROC) analysis. Assessment of the agreement between conventional US and ABVU indicated that the posterior echo feature was the most discordant of the seven features (κ = 0.371 ± 0.225) and that orientation had the greatest agreement (κ = 0.608 ± 0.210). The final assessment showed substantial agreement (κ = 0.773 ± 0.104). The differences in the areas under the ROC curves (Az) between conventional US and ABVU were not statistically significant for any individual reader, but the mean Az values by multi-reader multi-case analysis differed significantly (conventional US 0.991, ABVU 0.963; 95% CI -0.0471 to -0.0097). The means for sensitivity, specificity, positive predictive value, and negative predictive value of conventional US and ABVU did not differ significantly. There was substantial inter-observer agreement in the final assessment of solid breast masses by conventional US and ABVU. ROC analysis comparing the performance of conventional US and ABVU indicated a marginally significant difference in mean Az, but not in mean sensitivity, specificity, positive predictive value, or negative predictive value.
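
Cohen's κ, the agreement statistic used throughout this comparison, is straightforward to compute directly. The two raters' labels below are hypothetical, not the study's data:

```python
import numpy as np

def cohens_kappa(a, b):
    """Unweighted Cohen's kappa for two raters' categorical labels."""
    a, b = np.asarray(a), np.asarray(b)
    cats = np.union1d(a, b)
    po = float(np.mean(a == b))                       # observed agreement
    pe = sum(float(np.mean(a == c)) * float(np.mean(b == c)) for c in cats)
    return (po - pe) / (1.0 - pe)                     # chance-corrected

# Hypothetical benign(0)/malignant(1) calls by two readers.
us   = [1, 1, 0, 1, 0, 0, 1, 1]
abvu = [1, 0, 0, 1, 0, 0, 1, 1]
kappa = cohens_kappa(us, abvu)
```

Here the observed agreement is 7/8 and the chance agreement is 0.5, giving κ = 0.75, i.e. "substantial" agreement on the usual Landis–Koch scale.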

  10. Hyperspectral imaging technique for determination of pork freshness attributes

    NASA Astrophysics Data System (ADS)

    Li, Yongyu; Zhang, Leilei; Peng, Yankun; Tang, Xiuying; Chao, Kuanglin; Dhakal, Sagar

    2011-06-01

    Freshness of pork is an important quality attribute, which can vary greatly in storage and logistics. The specific objectives of this research were to develop a hyperspectral imaging system to predict pork freshness based on quality attributes such as total volatile basic nitrogen (TVB-N), pH value, and color parameters (L*, a*, b*). Pork samples were packed in sealed plastic bags and then stored at 4°C. Every 12 hours, hyperspectral scattering images were collected from the pork surface over the range of 400 nm to 1100 nm. Two different methods were used to extract scattering feature spectra from the hyperspectral scattering images. First, the spectral scattering profiles at individual wavelengths were fitted accurately by a three-parameter Lorentzian distribution (LD) function; second, reflectance spectra were extracted from the scattering images. Partial least squares regression (PLSR) was used to establish models to predict pork freshness. The results showed that the PLSR models based on reflectance spectra were better than those based on combinations of LD "parameter spectra" in predicting TVB-N, with a correlation coefficient (r) of 0.90 and a standard error of prediction (SEP) of 7.80 mg/100 g. Moreover, a prediction model for pork freshness was established by using a combination of TVB-N, pH, and color parameters. It gave good prediction results for pork freshness (r = 0.91). The research demonstrated that the hyperspectral scattering technique is a valid tool for real-time and nondestructive detection of pork freshness.
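
The Lorentzian-fitting step can be sketched as follows. The specific three-parameter form and its parameter names are assumptions, and a grid search over the nonlinear width parameter (with the two remaining parameters solved linearly at each grid point) stands in for a full nonlinear least-squares fitter:

```python
import numpy as np

def lorentzian(x, a, b, c):
    # Assumed three-parameter Lorentzian profile: peak value a,
    # half-width b, asymptotic offset c.
    return a / (1.0 + (x / b) ** 2) + c

x = np.linspace(0, 10, 200)
y = lorentzian(x, a=5.0, b=2.0, c=1.0)     # synthetic scattering profile

# Profile the nonlinear width b on a grid; a and c solve linearly.
best = None
for b in np.linspace(0.5, 5.0, 451):
    basis = np.stack([1.0 / (1.0 + (x / b) ** 2), np.ones_like(x)], axis=1)
    (a, c), *_ = np.linalg.lstsq(basis, y, rcond=None)
    resid = float(np.sum((basis @ np.array([a, c]) - y) ** 2))
    if best is None or resid < best[0]:
        best = (resid, a, b, c)

_, a_hat, b_hat, c_hat = best
```

On this noiseless synthetic profile the grid contains the true width, so the fit recovers (a, b, c) = (5, 2, 1); the fitted "parameter spectra" across wavelengths would then feed the PLSR models.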

  11. Utility of Clinical Parameters and Multiparametric MRI as Predictive Factors for Differentiating Uterine Sarcoma From Atypical Leiomyoma.

    PubMed

    Bi, Qiu; Xiao, Zhibo; Lv, Fajin; Liu, Yao; Zou, Chunxia; Shen, Yiqing

    2018-02-05

    The objective of this study was to find clinical parameters and qualitative and quantitative magnetic resonance imaging (MRI) features for differentiating uterine sarcoma from atypical leiomyoma (ALM) preoperatively and to calculate predictive values for uterine sarcoma. Data from 60 patients with uterine sarcoma and 88 patients with ALM confirmed by surgery and pathology were collected. Clinical parameters, qualitative MRI features, diffusion-weighted imaging with apparent diffusion coefficient values, and quantitative parameters of dynamic contrast-enhanced MRI of these two tumor types were compared. Predictive values for uterine sarcoma were calculated using multivariable logistic regression. Patient clinical manifestations, tumor locations, margins, T2-weighted imaging signals, mean apparent diffusion coefficient values, minimum apparent diffusion coefficient values, and time-signal intensity curves of solid tumor components were significant parameters for distinguishing between uterine sarcoma and ALM (all P < .001). Abnormal vaginal bleeding, tumors located mainly in the uterine cavity, ill-defined tumor margins, and mean apparent diffusion coefficient values of <1.272 × 10⁻³ mm²/s were significant preoperative predictors of uterine sarcoma. When the overall score of these four predictors was greater than or equal to 7 points, the sensitivity, specificity, accuracy, and positive and negative predictive values were 88.9%, 99.9%, 95.7%, 97.0%, and 95.1%, respectively. The use of clinical parameters and multiparametric MRI as predictive factors was beneficial for diagnosing uterine sarcoma preoperatively. These findings could be helpful for guiding treatment decisions. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
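
A points-based preoperative score of the kind described can be sketched as below. The per-predictor weights are illustrative stand-ins, not the paper's fitted values; only the ADC cutoff (1.272 × 10⁻³ mm²/s) and the ≥7-point decision threshold come from the abstract:

```python
# Minimal sketch of a four-predictor points score (weights hypothetical).
def sarcoma_score(bleeding, intracavitary, ill_defined_margin, mean_adc):
    points = 0
    points += 2 if bleeding else 0            # abnormal vaginal bleeding
    points += 2 if intracavitary else 0       # tumor mainly in uterine cavity
    points += 2 if ill_defined_margin else 0  # ill-defined margins
    points += 3 if mean_adc < 1.272e-3 else 0 # mean ADC cutoff, mm^2/s
    return points

# A hypothetical patient: bleeding, intracavitary tumor, low mean ADC.
s = sarcoma_score(True, True, False, 1.1e-3)
predicted_sarcoma = s >= 7                    # paper's decision threshold
```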

  12. Tongue Images Classification Based on Constrained High Dispersal Network.

    PubMed

    Meng, Dan; Cao, Guitao; Duan, Ye; Zhu, Minghua; Tu, Liping; Xu, Dong; Xu, Jiatuo

    2017-01-01

    Computer-aided tongue diagnosis has great potential to play an important role in traditional Chinese medicine (TCM). However, the majority of existing tongue image analysis and classification methods are based on low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural networks (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distributions. We introduce high dispersal and local response normalization operations to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method for tongue image classification in TCM studies.
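
The local response normalization operation the abstract mentions can be sketched in numpy. The AlexNet-style across-channel formulation and the hyperparameters below are assumptions, not CHDNet's exact settings:

```python
import numpy as np

def local_response_norm(x, k=2.0, n=2, alpha=1e-4, beta=0.75):
    """AlexNet-style local response normalization across channels.
    x has shape [C, H, W]; each channel is divided by a power of the
    local sum of squares over a window of neighboring channels."""
    C = x.shape[0]
    out = np.empty_like(x)
    for c in range(C):
        lo, hi = max(0, c - n // 2), min(C, c + n // 2 + 1)
        denom = (k + alpha * (x[lo:hi] ** 2).sum(axis=0)) ** beta
        out[c] = x[c] / denom
    return out

# On an all-ones input the output is constant per channel, scaled by
# the size of each channel's normalization window.
x = np.ones((4, 2, 2))
y = local_response_norm(x)
```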

  13. Mammographic parenchymal texture as an imaging marker of hormonal activity: a comparative study between pre- and post-menopausal women

    NASA Astrophysics Data System (ADS)

    Daye, Dania; Bobo, Ezra; Baumann, Bethany; Ioannou, Antonios; Conant, Emily F.; Maidment, Andrew D. A.; Kontos, Despina

    2011-03-01

    Mammographic parenchymal texture patterns have been shown to be related to breast cancer risk. Yet, little is known about the biological basis underlying this association. Here, we investigate the potential of mammographic parenchymal texture patterns as an inherent phenotypic imaging marker of endogenous hormonal exposure of the breast tissue. Digital mammographic (DM) images in the cranio-caudal (CC) view of the unaffected breast from 138 women diagnosed with unilateral breast cancer were retrospectively analyzed. Menopause status was used as a surrogate marker of endogenous hormonal activity. Retroareolar 2.5 cm² ROIs were segmented from the post-processed DM images using an automated algorithm. Parenchymal texture features of skewness, coarseness, contrast, energy, homogeneity, grey-level spatial correlation, and fractal dimension were computed. Receiver operating characteristic (ROC) curve analysis was performed to evaluate feature classification performance in distinguishing between 72 pre- and 66 post-menopausal women. Logistic regression was performed to assess the independent effect of each texture feature in predicting menopause status. ROC analysis showed that texture features have an inherent capacity to distinguish between pre- and post-menopausal statuses (AUC > 0.5, p < 0.05). Logistic regression including all texture features yielded an ROC curve with an AUC of 0.76. Addition of age at menarche, ethnicity, contraception use, and hormonal replacement therapy (HRT) use led to a modest model improvement (AUC = 0.78), while texture features maintained a significant contribution (p < 0.05). The observed differences in parenchymal texture features between pre- and post-menopausal women suggest that mammographic texture can potentially serve as a surrogate imaging marker of endogenous hormonal activity.
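
Several of the listed texture features (contrast, energy, homogeneity) derive from a gray-level co-occurrence matrix (GLCM). A toy numpy version for one pixel offset, on an illustrative 4-level test image:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Normalized gray-level co-occurrence matrix for one offset (toy)."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            g[img[i, j], img[i + dy, j + dx]] += 1
    return g / g.sum()

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])

p = glcm(img)   # horizontal neighbor pairs
contrast    = sum(p[i, j] * (i - j) ** 2 for i in range(4) for j in range(4))
energy      = float((p ** 2).sum())
homogeneity = sum(p[i, j] / (1 + abs(i - j)) for i in range(4) for j in range(4))
```

For this image the 12 horizontal pairs give contrast 7/12 and energy 1/6; a production pipeline would average such features over several offsets and directions.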

  14. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    PubMed

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
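
Object-level parallelization of feature extraction can be sketched with a worker pool: each tumor is an independent work item, which is what lets wall-clock time drop at the cost of memory. The tumor records and the feature function below are hypothetical stand-ins for QIFE's swappable components:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(tumor):
    """Stand-in for one object's 3D feature computation (hypothetical)."""
    voxels = tumor["voxels"]
    return {"id": tumor["id"],
            "volume": len(voxels),
            "mean": sum(voxels) / len(voxels)}

# Eight toy "tumors" of increasing size.
tumors = [{"id": i, "voxels": list(range(1, i + 2))} for i in range(8)]

# Object-level parallelization: independent objects map onto workers.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(extract_features, tumors))

volumes = [r["volume"] for r in results]
```

`pool.map` preserves input order, so downstream output components can rely on a stable object ordering regardless of which worker finished first.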

  15. A new approach to develop computer-aided detection schemes of digital mammograms

    NASA Astrophysics Data System (ADS)

    Tan, Maxine; Qian, Wei; Pu, Jiantao; Liu, Hong; Zheng, Bin

    2015-06-01

    The purpose of this study is to develop a new global mammographic image feature analysis based computer-aided detection (CAD) scheme and evaluate its performance in detecting positive screening mammography examinations. A dataset that includes images acquired from 1896 full-field digital mammography (FFDM) screening examinations was used in this study. Among them, 812 cases were positive for cancer and 1084 were negative or benign. After segmenting the breast area, a computerized scheme was applied to compute 92 global mammographic tissue-density-based features on each of four mammograms of the craniocaudal (CC) and mediolateral oblique (MLO) views. After adding three existing popular risk factors (woman's age, subjectively rated mammographic density, and family breast cancer history) into the initial feature pool, we applied a sequential forward floating selection algorithm to select relevant features from the bilateral CC and MLO view images separately. The selected CC and MLO view image features were used to train two artificial neural networks (ANNs). The results were then fused by a third ANN to build a two-stage classifier to predict the likelihood of the FFDM screening examination being positive. CAD performance was tested using a ten-fold cross-validation method. The computed area under the receiver operating characteristic curve was AUC = 0.779 ± 0.025, and the odds ratio monotonically increased from 1 to 31.55 as CAD-generated detection scores increased. The study demonstrated that this new global image feature based CAD scheme had a relatively higher discriminatory power to cue the FFDM examinations with high risk of being positive, which may provide a new CAD-cueing method to assist radiologists in reading and interpreting screening mammograms.
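
The forward feature-selection idea can be sketched with a greedy loop. The correlation-based score below is a deliberately simple stand-in for the paper's sequential forward floating selection plus ANN pipeline, and all data are synthetic (the label depends only on features 0 and 3 by construction):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic feature pool: only columns 0 and 3 carry signal.
n = 200
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=n) > 0).astype(float)

def score(cols):
    """|correlation| of a naive sum-of-features predictor with the label."""
    if not cols:
        return 0.0
    s = X[:, cols].sum(axis=1)
    return abs(np.corrcoef(s, y)[0, 1])

# Greedy forward selection of two features (no "floating" backtracking,
# unlike true SFFS, to keep the sketch short).
selected = []
for _ in range(2):
    best = max((c for c in range(6) if c not in selected),
               key=lambda c: score(selected + [c]))
    selected.append(best)
```

With this construction the loop picks the two informative columns; the paper's version additionally revisits earlier choices (the "floating" step) and scores candidates with the downstream classifier rather than a correlation.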

  16. Contrast-enhanced CT features of hepatoblastoma: Can we predict histopathology?

    PubMed

    Baheti, Akshay D; Luana Stanescu, A; Li, Ning; Chapman, Teresa

    Hepatoblastoma is the most common hepatic malignancy occurring in the pediatric population. Intratumoral cellular behavior varies, and the small-cell undifferentiated histopathology carries a poorer prognosis than other tissue subtypes. Neoadjuvant chemotherapy is recommended for this tumor subtype prior to surgical resection in most cases. Early identification of tumors with poor prognosis could have a significant clinical impact. Objective: The aim of this work was to identify imaging features of small-cell undifferentiated subtype hepatoblastoma that can help distinguish this subtype from more favorable tumors and potentially guide clinical management. We also sought to characterize contrast-enhanced CT (CECT) features of hepatoblastoma that correlate with metastatic disease and patient outcome. Our study included 34 patients (24 males, 10 females) with a mean age of 16 months (range: 0-46 months) with surgically confirmed hepatoblastoma and available baseline abdominal imaging by CECT. Clinical data and CT abdominal images were retrospectively analyzed. Five tumors with small-cell undifferentiated components were identified. All of these tumors demonstrated irregular margins on CT imaging. Advanced PRETEXT stage, vascular invasion, and irregular margins were associated with metastatic disease and decreased survival. Capsular retraction was also significantly associated with decreased survival. Irregular tumor margins demonstrated a statistically significant association with the presence of small-cell undifferentiated components. No other imaging feature showed a statistically significant association. Tumor margin irregularity, vascular invasion, capsular retraction, and PRETEXT stage correlate with worse patient outcomes. Irregular tumor margin was the only imaging feature significantly associated with the more aggressive tumor subtype. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, M; Abazeed, M; Woody, N

    Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009 and 2013, we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3 ± 10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed, and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R, and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p = 0.0233, multiple R2 = 0.60) from GLCM-based texture analysis and the standard deviation (p = 0.0194, multiple R2 = 0.60) from the histogram-based features were statistically significantly correlated with time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
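
The two significant features reduce to a few lines. Below, the Shannon entropy of the normalized gray-level distribution (a first-order stand-in for the paper's GLCM-based entropy) and the histogram standard deviation are computed on a hypothetical GTV intensity sample:

```python
import numpy as np

# Hypothetical quantized GTV intensities (4 gray levels).
vals = np.array([0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3])

p = np.bincount(vals) / len(vals)       # normalized gray-level histogram
entropy = float(-(p * np.log2(p)).sum())  # Shannon entropy in bits
std = float(vals.std())                  # histogram standard deviation
```

For this sample the histogram is [2, 4, 2, 8]/16, giving entropy 1.75 bits and variance 1.25; the GLCM version applies the same −Σ p log p formula to co-occurrence probabilities instead of first-order ones.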

  18. Machine Learning-based Texture Analysis of Contrast-enhanced MR Imaging to Differentiate between Glioblastoma and Primary Central Nervous System Lymphoma.

    PubMed

    Kunimatsu, Akira; Kunimatsu, Natsuko; Yasaka, Koichiro; Akai, Hiroyuki; Kamiya, Kouhei; Watadani, Takeyuki; Mori, Harushi; Abe, Osamu

    2018-05-16

    Although advanced MRI techniques are increasingly available, imaging differentiation between glioblastoma and primary central nervous system lymphoma (PCNSL) is sometimes confusing. We aimed to evaluate the performance of image classification by support vector machine, a method of traditional machine learning, using texture features computed from contrast-enhanced T1-weighted images. This retrospective study on preoperative brain tumor MRI included 76 consecutive, initially treated patients with glioblastoma (n = 55) or PCNSL (n = 21) from one institution, consisting of an independent training group (n = 60: 44 glioblastomas and 16 PCNSLs) and a test group (n = 16: 11 glioblastomas and 5 PCNSLs) sequentially separated by time period. A total set of 67 texture features was computed on routine contrast-enhanced T1-weighted images of the training group, and the top four most discriminating features were selected as input variables to train support vector machine classifiers. These features were then evaluated on the test group with subsequent image classification. The area under the receiver operating characteristic curve on the training data was 0.99 (95% confidence interval [CI]: 0.96-1.00) for the classifier with a Gaussian kernel and 0.87 (95% CI: 0.77-0.95) for the classifier with a linear kernel. On the test data, both classifiers showed a prediction accuracy of 75% (12/16) on the test images. Although further improvement is needed, our preliminary results suggest that machine learning-based image classification may provide complementary diagnostic information on routine brain MRI.
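
As a rough stand-in for the Gaussian-kernel SVM, the sketch below fits a kernel ridge classifier with the same RBF kernel on synthetic four-feature "texture" vectors (a regularized least-squares fit replaces the max-margin solver; all data and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 4-texture-feature vectors for two classes
# (+1 = "glioblastoma", -1 = "PCNSL"; entirely hypothetical data).
X0 = rng.normal(0.0, 1.0, size=(30, 4))
X1 = rng.normal(2.0, 1.0, size=(30, 4))
X = np.vstack([X0, X1])
y = np.array([-1.0] * 30 + [1.0] * 30)

def rbf(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge classifier: same kernel as an RBF-SVM, but trained by
# solving a regularized linear system instead of the margin problem.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(X)), y)

def predict(Xnew):
    return np.sign(rbf(Xnew, X) @ alpha)

acc = float((predict(X) == y).mean())   # training accuracy
```

A real evaluation would, as in the paper, hold out a temporally separated test group rather than reporting training accuracy.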

  19. Integration of co-localized glandular morphometry and protein biomarker expression in immunofluorescent images for prostate cancer prognosis

    NASA Astrophysics Data System (ADS)

    Scott, Richard; Khan, Faisal M.; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo

    2015-03-01

    Immunofluorescent (IF) image analysis of tissue pathology has proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architectures in discrete gland rings. However, while biomarker and glandular morphometric features have been combined as separate predictors in multivariate models, there is a lack of integrative features for biomarkers co-localized within specific morphological sub-types; for example the evaluation of androgen receptor (AR) expression within Gleason 3 glands only. In this work we propose a novel framework employing multiple techniques to generate integrated metrics of morphology and biomarker expression. We demonstrate the utility of the approaches in predicting clinical disease progression in images from 326 prostate biopsies and 373 prostatectomies. Our proposed integrative approaches yield significant improvements over existing IF image feature metrics. This work presents some of the first algorithms for generating innovative characteristics in tissue diagnostics that integrate co-localized morphometry and protein biomarker expression.

  20. BP network for atorvastatin effect evaluation from ultrasound images features classification

    NASA Astrophysics Data System (ADS)

    Fang, Mengjie; Yang, Xin; Liu, Yang; Xu, Hongwei; Liang, Huageng; Wang, Yujie; Ding, Mingyue

    2013-10-01

    Atherosclerotic lesions at the carotid artery are a major cause of emboli or atheromatous debris, resulting in approximately 88% of ischemic strokes in the USA in 2006. Stroke is becoming the most common cause of death worldwide, although patient management and prevention strategies have reduced the stroke rate considerably over the past decades. Many research studies have been carried out on how to quantitatively evaluate local arterial effects for potential carotid disease treatments. As an inexpensive, convenient, and fast means of detection, ultrasonic medical testing is widespread throughout the world, so it is very practical to use ultrasound technology in the prevention and treatment of carotid atherosclerosis. This paper is dedicated to this field. Currently, many ultrasound image characteristics of carotid plaque have been proposed. After screening a large number of features (including 26 morphological and 85 texture features), we obtained a combination of six shape characteristics and six texture characteristics. In order to test the validity and accuracy of these combined features, we established a back-propagation (BP) neural network to classify atherosclerosis plaques between an atorvastatin group and a placebo group. The leave-one-case-out protocol was utilized on a database of 768 carotid ultrasound images of 12 patients (5 subjects in the placebo group and 7 subjects in the atorvastatin group) for the evaluation. The classification results showed that the combined features have good recognition ability, with an overall accuracy of 83.93%, sensitivity of 82.14%, specificity of 85.20%, positive predictive value of 79.86%, negative predictive value of 86.98%, Matthews correlation coefficient of 67.08%, and Youden's index of 67.34%. The receiver operating characteristic (ROC) curve in our test also performed well.
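
The reported evaluation metrics all follow from the confusion-matrix counts; the counts below are hypothetical, not the study's:

```python
import math

def metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion counts."""
    sens = tp / (tp + fn)                     # sensitivity (recall)
    spec = tn / (tn + fp)                     # specificity
    ppv  = tp / (tp + fp)                     # positive predictive value
    npv  = tn / (tn + fn)                     # negative predictive value
    mcc  = (tp * tn - fp * fn) / math.sqrt(   # Matthews correlation coeff.
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    youden = sens + spec - 1                  # Youden's index
    return sens, spec, ppv, npv, mcc, youden

# Hypothetical counts for illustration only.
sens, spec, ppv, npv, mcc, youden = metrics(tp=40, fp=10, tn=45, fn=5)
```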

  1. Association between pathology and texture features of multi parametric MRI of the prostate

    NASA Astrophysics Data System (ADS)

    Kuess, Peter; Andrzejewski, Piotr; Nilsson, David; Georg, Petra; Knoth, Johannes; Susani, Martin; Trygg, Johan; Helbich, Thomas H.; Polanec, Stephan H.; Georg, Dietmar; Nyholm, Tufve

    2017-10-01

    The role of multi-parametric (mp)MRI in the diagnosis and treatment of prostate cancer has increased considerably. An alternative to visual inspection of mpMRI is evaluation using histogram-based (first-order statistics) parameters and textural features (second-order statistics). The aims of the present work were to investigate the relationship between benign and malignant sub-volumes of the prostate and textures obtained from mpMR images. The performance of tumor prediction was investigated based on the combination of histogram-based and textural parameters. Subsequently, the relative importance of mpMR images was assessed and the benefit of additional imaging analyzed. Finally, sub-structures based on the PI-RADS classification were investigated as potential regions in which to automatically detect malignant lesions. Twenty-five patients who received mpMRI prior to radical prostatectomy were included in the study. The imaging protocol included T2, DWI, and DCE. Delineation of tumor regions was performed based on pathological information. First- and second-order statistics were derived from each structure and for all image modalities. The resulting data were processed with multivariate analysis, using PCA (principal component analysis) and OPLS-DA (orthogonal partial least squares discriminant analysis) for separation of malignant and healthy tissue. PCA showed a clear difference between tumor and healthy regions in the peripheral zone for all investigated images. The predictive ability of the OPLS-DA models increased for all image modalities when first- and second-order statistics were combined. The predictive value reached a plateau after adding ADC and T2, and did not increase further with the addition of other image information. The present study indicates a distinct difference in the signatures of malignant and benign prostate tissue. This is an absolute prerequisite for automatic tumor segmentation, but only the first step in that direction. 
For the specific identified signature, DCE did not add complementary information to T2 and ADC maps.
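
The PCA separation the authors describe can be sketched via SVD of a centered feature matrix; the six-feature vectors below are synthetic stand-ins for the first- and second-order statistics of tumor and healthy regions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 6-feature vectors for "tumor" and "healthy" regions
# (hypothetical stand-ins for T2/ADC/DCE histogram and texture stats).
tumor   = rng.normal(1.0, 0.3, size=(20, 6))
healthy = rng.normal(0.0, 0.3, size=(20, 6))
X = np.vstack([tumor, healthy])

# PCA via SVD of the centered feature matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]            # scores on the first principal component

# With a large between-group mean shift, PC1 separates the groups.
sep = abs(pc1[:20].mean() - pc1[20:].mean())
```

OPLS-DA would go one step further than this unsupervised projection, rotating the components so that the first one is explicitly aligned with the class labels.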

  2. Automatic Scoring of Multiple Semantic Attributes With Multi-Task Feature Leverage: A Study on Pulmonary Nodules in CT Images.

    PubMed

    Sihong Chen; Jing Qin; Xing Ji; Baiying Lei; Tianfu Wang; Dong Ni; Jie-Zhi Cheng

    2017-03-01

    The gap between computational and semantic features is one of the major factors that keeps computer-aided diagnosis (CAD) performance from reaching clinical usage. To bridge this gap, we exploit three multi-task learning (MTL) schemes to leverage heterogeneous computational features derived from deep learning models of stacked denoising autoencoders (SDAE) and convolutional neural networks (CNN), as well as hand-crafted Haar-like and HoG features, for the description of 9 semantic features for lung nodules in CT images. We posit that there may exist relations among the semantic features of "spiculation", "texture", "margin", etc., that can be explored with the MTL. The Lung Image Database Consortium (LIDC) data is adopted in this study for its rich annotation resources. The LIDC nodules were quantitatively scored w.r.t. 9 semantic features by 12 radiologists from several institutes in the U.S.A. By treating each semantic feature as an individual task, the MTL schemes select and map the heterogeneous computational features toward the radiologists' ratings with cross-validation evaluation schemes on 2400 randomly selected nodules from the LIDC dataset. The experimental results suggest that the predicted semantic scores from the three MTL schemes are closer to the radiologists' ratings than the scores from single-task LASSO and elastic net regression methods. The proposed semantic attribute scoring scheme may provide richer quantitative assessments of nodules for better support of diagnostic decision and management. Meanwhile, the capability of automatically associating medical image contents with clinical semantic terms may also assist the development of medical search engines.

  3. Lumen-based detection of prostate cancer via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Kwak, Jin Tae; Hewitt, Stephen M.

    2017-03-01

We present a deep learning approach for detecting prostate cancers. The approach consists of two steps. In the first step, we perform tissue segmentation that identifies lumens within digitized prostate tissue specimen images. Intensity- and texture-based image features are computed at five different scales, and a multiview boosting method is adopted to cooperatively combine the image features from differing scales and to identify lumens. In the second step, we utilize convolutional neural networks (CNN) to automatically extract high-level image features of lumens and to predict cancers. The segmented lumens are rescaled to reduce computational complexity, and data augmentation by scaling, rotating, and flipping the rescaled image is applied to avoid overfitting. We evaluate the proposed method using two tissue microarrays (TMA) - TMA1 includes 162 tissue specimens (73 Benign and 89 Cancer) and TMA2 comprises 185 tissue specimens (70 Benign and 115 Cancer). In cross-validation on TMA1, the proposed method achieved an AUC of 0.95 (CI: 0.93-0.98). Trained on TMA1 and tested on TMA2, CNN obtained an AUC of 0.95 (CI: 0.92-0.98). This demonstrates that the proposed method can potentially improve prostate cancer pathology.
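    The rotate-and-flip portion of the augmentation described above can be sketched as the eight dihedral variants of a patch; this is a generic illustration on a synthetic patch, not the authors' code:

```python
import numpy as np

def augment(patch):
    """Return the 8 dihedral variants (4 rotations x optional flip) of a 2-D patch."""
    variants = []
    for k in range(4):
        rot = np.rot90(patch, k)     # rotate by k * 90 degrees
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants

patch = np.arange(9, dtype=float).reshape(3, 3)  # toy stand-in for a rescaled lumen image
aug = augment(patch)
print(len(aug))  # 8 variants per input patch
```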

  4. Multiple-camera/motion stereoscopy for range estimation in helicopter flight

    NASA Technical Reports Server (NTRS)

    Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.

    1993-01-01

Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm will be illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
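    The recursive range estimation step can be illustrated with a deliberately simplified model: a camera with known forward motion along its optical axis tracks one feature, and an EKF estimates the feature's lateral offset and depth from its noisy image coordinate. The focal length, speed, and noise levels below are invented for this sketch and are not from the paper:

```python
import numpy as np

f, v, dt = 500.0, 2.0, 0.1        # focal length (px), forward speed (m/s), frame interval (s)
true_X, z0 = 1.0, 20.0            # true lateral offset (m) and initial depth (m)

x_est = np.array([0.5, 30.0])     # state [X, z], deliberately poor initial guess
P = np.diag([1.0, 100.0])         # initial covariance
Q = np.diag([1e-6, 1e-4])         # small process noise
R = np.array([[1.0]])             # 1 px^2 measurement noise

rng = np.random.default_rng(1)
z_true = z0
for _ in range(80):
    # propagate: known forward motion reduces depth, lateral offset is constant
    z_true -= v * dt
    x_est[1] -= v * dt
    P = P + Q
    # measure: the feature's image coordinate u = f * X / z, with pixel noise
    u_meas = f * true_X / z_true + rng.normal(scale=1.0)
    Xe, ze = x_est
    H = np.array([[f / ze, -f * Xe / ze**2]])   # Jacobian of u w.r.t. [X, z]
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ np.array([u_meas - f * Xe / ze])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated depth {x_est[1]:.2f} m, true depth {z_true:.2f} m")
```

    Forward motion makes depth observable here because the image coordinate grows as the feature approaches, which is exactly the geometry the hybrid motion/stereo scheme exploits.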

  5. Design Architectures for Optically Multiplexed Imaging

    DTIC Science & Technology

    2015-09-16

which single task is the highest priority task ∗ according to Equation 16. In essence, the task that is most often predicted to be of the...deployment (or a null deployment from inaction), our features consisted of pairwise relationships between each placed decoy and each missile. For each...decoy/missile pairing, we have features describing whether a decoy had been placed such that the missile would be successfully distracted by

  6. On combining image-based and ontological semantic dissimilarities for medical image retrieval applications

    PubMed Central

    Kurtz, Camille; Depeursinge, Adrien; Napel, Sandy; Beaulieu, Christopher F.; Rubin, Daniel L.

    2014-01-01

    Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means to providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic “soft” prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images and annotated with semantic terms of the RadLex ontology. 
The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automatic approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies. PMID:25036769

  7. Recursive feature elimination for biomarker discovery in resting-state functional connectivity.

    PubMed

    Ravishankar, Hariharan; Madhavan, Radhika; Mullick, Rakesh; Shetty, Teena; Marinelli, Luca; Joel, Suresh E

    2016-08-01

Biomarker discovery involves finding correlations between features and clinical symptoms to aid clinical decision. This task is especially difficult in resting state functional magnetic resonance imaging (rs-fMRI) data due to low SNR, high-dimensionality of images, inter-subject and intra-subject variability and small numbers of subjects compared to the number of derived features. Traditional univariate analysis suffers from the problem of multiple comparisons. Here, we adopt an alternative data-driven method for identifying population differences in functional connectivity. We propose a machine-learning approach to down-select functional connectivity features associated with symptom severity in mild traumatic brain injury (mTBI). Using this approach, we identified functional regions with altered connectivity in mTBI, including the executive control, visual and precuneus networks. We compared functional connections at multiple resolutions to determine which scale would be more sensitive to changes related to patient recovery. These modular network-level features can be used as diagnostic tools for predicting disease severity and recovery profiles.
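    The down-selection strategy resembles recursive feature elimination driven by a linear SVM, which can be sketched with scikit-learn on synthetic data (the sample sizes and feature counts here are illustrative only, not the study's connectivity matrices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Toy stand-in: 200 "subjects", 50 "connectivity features", 5 genuinely informative
X, y = make_classification(n_samples=200, n_features=50, n_informative=5,
                           n_redundant=0, random_state=0)
# Recursively drop the weakest 20% of features as ranked by linear-SVM weights
rfe = RFE(SVC(kernel="linear"), n_features_to_select=5, step=0.2).fit(X, y)
selected = np.flatnonzero(rfe.support_)
print("selected feature indices:", selected)
```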

  8. Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach.

    PubMed

    Zhang, Jianming; Sclaroff, Stan

    2016-05-01

We demonstrate the usefulness of surroundedness for eye fixation prediction by proposing a Boolean Map based Saliency model (BMS). In our formulation, an image is characterized by a set of binary images, which are generated by randomly thresholding the image's feature maps in a whitened feature space. Based on a Gestalt principle of figure-ground segregation, BMS computes a saliency map by discovering surrounded regions via topological analysis of Boolean maps. Furthermore, we draw a connection between BMS and the Minimum Barrier Distance to provide insight into why and how BMS can properly capture the surroundedness cue via Boolean maps. The strength of BMS is verified by its simplicity, efficiency and superior performance compared with 10 state-of-the-art methods on seven eye tracking benchmark datasets.
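    The surroundedness idea can be approximated in a few lines: threshold a feature map at random levels and keep connected regions of each Boolean map that do not touch the image border. This is a toy reconstruction of the concept, not the published BMS implementation (which whitens color feature maps and adds post-processing):

```python
import numpy as np
from scipy import ndimage

def bms_saliency(feature_map, n_thresh=16, rng=None):
    """Toy BMS sketch: average 'surrounded' masks over randomly thresholded Boolean maps."""
    rng = np.random.default_rng(rng)
    sal = np.zeros_like(feature_map, dtype=float)
    lo, hi = feature_map.min(), feature_map.max()
    for t in rng.uniform(lo, hi, size=n_thresh):
        for bmap in (feature_map > t, feature_map <= t):
            labels, _ = ndimage.label(bmap)
            border = np.unique(np.concatenate([labels[0], labels[-1],
                                               labels[:, 0], labels[:, -1]]))
            # regions not touching the image border are "surrounded"
            surrounded = bmap & ~np.isin(labels, border)
            sal += surrounded
    return sal / sal.max() if sal.max() > 0 else sal

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0           # a bright surrounded square on a dark background
sal = bms_saliency(img, rng=0)
print(sal[15, 15], sal[0, 0])     # center salient, corner not
```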

  9. Magnetic Resonance Imaging Assessment of Squamous Cell Carcinoma of the Anal Canal Before and After Chemoradiation: Can MRI Predict for Eventual Clinical Outcome?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goh, Vicky, E-mail: vicky.goh@stricklandscanner.org.u; Gollub, Frank K.; Liaw, Jonathan

    2010-11-01

Purpose: To describe the MRI appearances of squamous cell carcinoma of the anal canal before and after chemoradiation and to assess whether MRI features predict for clinical outcome. Methods and Materials: Thirty-five patients (15 male, 20 female; mean age 60.8 years) with histologically proven squamous cell cancer of the anal canal underwent MRI before and 6-8 weeks after definitive chemoradiation. Images were reviewed retrospectively by two radiologists in consensus, blinded to clinical outcome: tumor size, signal intensity, extent, and TNM stage were recorded. Following treatment, patients were defined as responders by T and N downstaging and Response Evaluation Criteria in Solid Tumors (RECIST). Final clinical outcome was determined by imaging and case note review: patients were divided into (1) disease-free and (2) with relapse, and compared using appropriate univariate methods to identify imaging predictors; statistical significance was at 5%. Results: The majority of tumors were ≤T2 (23/35; 65.7%) and N0 (21/35; 60%), mean size 3.75 cm, and hyperintense (++ to +++, 24/35 patients; 68%). Following chemoradiation, there was a size reduction in all cases (mean 73.3%) and a reduction in signal intensity in 26/35 patients (74.2%). The majority of patients were classified as responders (26/35 (74.2%) patients by T and N downstaging; and 30/35 (85.7%) patients by RECIST). At a median follow-up of 33.5 months, 25 patients (71.4%) remained disease-free; 10 patients (28.6%) had locoregional or metastatic disease. Univariate analysis showed that no individual MRI features were predictive of eventual outcome. Conclusion: Early assessment of response by MRI at 6-8 weeks is unhelpful in predicting future clinical outcome.

  10. Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.

    PubMed

    Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D

    2016-02-01

    The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
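    Stacked generalization as described (base learners whose predictions feed a meta-learner) can be sketched with scikit-learn. For brevity both base models below see the same synthetic features, whereas a faithful multi-modal setup would restrict each base learner to its own modality's feature block:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for matched "imaging" and "RNA-seq" patient features
X, y = make_classification(n_samples=300, n_features=40, n_informative=8,
                           random_state=0)
base = [("img", SVC(probability=True, random_state=0)),       # hypothetical image-modality learner
        ("rna", LogisticRegression(max_iter=1000))]           # hypothetical RNA-seq learner
# The meta-learner combines the base predictions into a single result
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression(max_iter=1000))
score = cross_val_score(stack, X, y, cv=5).mean()
print(f"stacked CV accuracy: {score:.3f}")
```

    The meta-learner's coefficients over the base predictions are one way to read off each modality's contribution to the final decision, echoing the interpretability point in the abstract.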

  11. Improved characterization of molecular phenotypes in breast lesions using 18F-FDG PET image homogeneity

    NASA Astrophysics Data System (ADS)

    Cao, Kunlin; Bhagalia, Roshni; Sood, Anup; Brogi, Edi; Mellinghoff, Ingo K.; Larson, Steven M.

    2015-03-01

Positron emission tomography (PET) using fluorodeoxyglucose (18F-FDG) is commonly used in the assessment of breast lesions by computing voxel-wise standardized uptake value (SUV) maps. Simple metrics derived from ensemble properties of SUVs within each identified breast lesion are routinely used for disease diagnosis. The maximum SUV within the lesion (SUVmax) is the most popular of these metrics. However, these simple metrics are known to be error-prone and are susceptible to image noise. Finding reliable SUV map-based features that correlate to established molecular phenotypes of breast cancer (viz. estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2) expression) will enable non-invasive disease management. This study investigated 36 SUV features based on first and second order statistics, local histograms and texture of segmented lesions to predict ER and PR expression in 51 breast cancer patients. True ER and PR expression was obtained via immunohistochemistry (IHC) of tissue samples from each lesion. A supervised learning, adaptive boosting-support vector machine (AdaBoost-SVM), framework was used to select a subset of features to classify breast lesions into distinct phenotypes. Performance of the trained multi-feature classifier was compared against the baseline single-feature SUVmax classifier using receiver operating characteristic (ROC) curves. Results show that texture features encoding local lesion homogeneity extracted from gray-level co-occurrence matrices are the strongest discriminator of lesion ER expression. In particular, classifiers including these features increased prediction accuracy from 0.75 (baseline) to 0.82 and the area under the ROC curve from 0.64 (baseline) to 0.75.
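    A GLCM homogeneity (inverse difference moment) feature of the kind found discriminative here can be computed directly; the quantization level count and the single horizontal offset below are illustrative choices for the sketch:

```python
import numpy as np

def glcm_homogeneity(img, levels=8):
    """Homogeneity (inverse difference moment) of the horizontal-offset GLCM."""
    # quantize intensities into `levels` gray levels
    q = np.minimum((img.astype(float) * levels / (img.max() + 1e-9)).astype(int),
                   levels - 1)
    # co-occurrence counts for horizontally adjacent pixel pairs
    i, j = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (i, j), 1.0)
    glcm /= glcm.sum()
    ii, jj = np.indices(glcm.shape)
    return float((glcm / (1.0 + (ii - jj) ** 2)).sum())

flat = np.full((16, 16), 5.0)                                   # homogeneous "lesion"
noisy = np.random.default_rng(0).integers(0, 8, (16, 16)).astype(float)
h_flat, h_noisy = glcm_homogeneity(flat), glcm_homogeneity(noisy)
print(round(h_flat, 3), round(h_noisy, 3))  # homogeneous patch scores higher
```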

  12. Fronto-Temporal Connectivity Predicts ECT Outcome in Major Depression.

    PubMed

    Leaver, Amber M; Wade, Benjamin; Vasavada, Megha; Hellemann, Gerhard; Joshi, Shantanu H; Espinoza, Randall; Narr, Katherine L

    2018-01-01

    Electroconvulsive therapy (ECT) is arguably the most effective available treatment for severe depression. Recent studies have used MRI data to predict clinical outcome to ECT and other antidepressant therapies. One challenge facing such studies is selecting from among the many available metrics, which characterize complementary and sometimes non-overlapping aspects of brain function and connectomics. Here, we assessed the ability of aggregated, functional MRI metrics of basal brain activity and connectivity to predict antidepressant response to ECT using machine learning. A radial support vector machine was trained using arterial spin labeling (ASL) and blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) metrics from n = 46 (26 female, mean age 42) depressed patients prior to ECT (majority right-unilateral stimulation). Image preprocessing was applied using standard procedures, and metrics included cerebral blood flow in ASL, and regional homogeneity, fractional amplitude of low-frequency modulations, and graph theory metrics (strength, local efficiency, and clustering) in BOLD data. A 5-repeated 5-fold cross-validation procedure with nested feature-selection validated model performance. Linear regressions were applied post hoc to aid interpretation of discriminative features. The range of balanced accuracy in models performing statistically above chance was 58-68%. Here, prediction of non-responders was slightly higher than for responders (maximum performance 74 and 64%, respectively). Several features were consistently selected across cross-validation folds, mostly within frontal and temporal regions. Among these were connectivity strength among: a fronto-parietal network [including left dorsolateral prefrontal cortex (DLPFC)], motor and temporal networks (near ECT electrodes), and/or subgenual anterior cingulate cortex (sgACC). 
Our data indicate that pattern classification of multimodal fMRI metrics can successfully predict ECT outcome, particularly for individuals who will not respond to treatment. Notably, connectivity with networks highly relevant to ECT and depression were consistently selected as important predictive features. These included the left DLPFC and the sgACC, which are both targets of other neurostimulation therapies for depression, as well as connectivity between motor and right temporal cortices near electrode sites. Future studies that probe additional functional and structural MRI metrics and other patient characteristics may further improve the predictive power of these and similar models.
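    The evaluation protocol above (repeated stratified cross-validation with feature selection nested inside each training fold) can be sketched as follows; the sample and feature counts are placeholders, not the study's fMRI metrics:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy stand-in: 46 "patients", many fMRI-derived metrics, binary responder label
X, y = make_classification(n_samples=46, n_features=200, n_informative=10,
                           random_state=0)
# Selection lives inside the pipeline, so it is refit on every training fold
pipe = Pipeline([("scale", StandardScaler()),
                 ("select", SelectKBest(f_classif, k=20)),
                 ("svm", SVC(kernel="rbf"))])          # radial SVM, as in the study
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
acc = cross_val_score(pipe, X, y, cv=cv, scoring="balanced_accuracy").mean()
print(f"balanced accuracy: {acc:.2f}")
```

    Nesting the selector in the pipeline is what prevents the feature-selection leakage that would otherwise inflate the reported balanced accuracy.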

  13. Predicting individualized clinical measures by a generalized prediction framework and multimodal fusion of MRI data

    PubMed Central

    Meng, Xing; Jiang, Rongtao; Lin, Dongdong; Bustillo, Juan; Jones, Thomas; Chen, Jiayu; Yu, Qingbao; Du, Yuhui; Zhang, Yu; Jiang, Tianzi; Sui, Jing; Calhoun, Vince D.

    2016-01-01

    Neuroimaging techniques have greatly enhanced the understanding of neurodiversity (human brain variation across individuals) in both health and disease. The ultimate goal of using brain imaging biomarkers is to perform individualized predictions. Here we proposed a generalized framework that can predict explicit values of the targeted measures by taking advantage of joint information from multiple modalities. This framework also enables whole brain voxel-wise searching by combining multivariate techniques such as ReliefF, clustering, correlation-based feature selection and multiple regression models, which is more flexible and can achieve better prediction performance than alternative atlas-based methods. For 50 healthy controls and 47 schizophrenia patients, three kinds of features derived from resting-state fMRI (fALFF), sMRI (gray matter) and DTI (fractional anisotropy) were extracted and fed into a regression model, achieving high prediction for both cognitive scores (MCCB composite r = 0.7033, MCCB social cognition r = 0.7084) and symptomatic scores (positive and negative syndrome scale [PANSS] positive r = 0.7785, PANSS negative r = 0.7804). Moreover, the brain areas likely responsible for cognitive deficits of schizophrenia, including middle temporal gyrus, dorsolateral prefrontal cortex, striatum, cuneus and cerebellum, were located with different weights, as well as regions predicting PANSS symptoms, including thalamus, striatum and inferior parietal lobule, pinpointing the potential neuromarkers. Finally, compared to a single modality, multimodal combination achieves higher prediction accuracy and enables individualized prediction on multiple clinical measures. There is more work to be done, but the current results highlight the potential utility of multimodal brain imaging biomarkers to eventually inform clinical decision-making. PMID:27177764

  14. A general prediction model for the detection of ADHD and Autism using structural and functional MRI.

    PubMed

    Sen, Bhaskar; Borle, Neil C; Greiner, Russell; Brown, Matthew R G

    2018-01-01

This work presents a novel method for learning a model that can diagnose Attention Deficit Hyperactivity Disorder (ADHD), as well as Autism, using structural texture and functional connectivity features obtained from 3-dimensional structural magnetic resonance imaging (MRI) and 4-dimensional resting-state functional magnetic resonance imaging (fMRI) scans of subjects. We explore a series of three learners: (1) The LeFMS learner first extracts features from the structural MRI images using the texture-based filters produced by a sparse autoencoder. These filters are then convolved with the original MRI image using an unsupervised convolutional network. The resulting features are used as input to a linear support vector machine (SVM) classifier. (2) The LeFMF learner produces a diagnostic model by first computing spatial non-stationary independent components of the fMRI scans, which it uses to decompose each subject's fMRI scan into the time courses of these common spatial components. These features can then be used with a learner by themselves or in combination with other features to produce the model. Regardless of which approach is used, the final set of features are input to a linear support vector machine (SVM) classifier. (3) Finally, the overall LeFMSF learner uses the combined features obtained from the two feature extraction processes in (1) and (2) above as input to an SVM classifier, achieving an accuracy of 0.673 on the ADHD-200 holdout data and 0.643 on the ABIDE holdout data. Both of these results, obtained with the same LeFMSF framework, are the best known hold-out accuracies on these datasets when using only imaging data, exceeding previously published results by 0.012 for ADHD and 0.042 for Autism. Our results show that combining multi-modal features can yield good classification accuracy for diagnosis of ADHD and Autism, which is an important step towards computer-aided diagnosis of these psychiatric diseases and perhaps others as well.

  15. Automated discovery of structural features of the optic nerve head on the basis of image and genetic data

    NASA Astrophysics Data System (ADS)

    Christopher, Mark; Tang, Li; Fingert, John H.; Scheetz, Todd E.; Abramoff, Michael D.

    2014-03-01

    Evaluation of optic nerve head (ONH) structure is a commonly used clinical technique for both diagnosis and monitoring of glaucoma. Glaucoma is associated with characteristic changes in the structure of the ONH. We present a method for computationally identifying ONH structural features using both imaging and genetic data from a large cohort of participants at risk for primary open angle glaucoma (POAG). Using 1054 participants from the Ocular Hypertension Treatment Study, ONH structure was measured by application of a stereo correspondence algorithm to stereo fundus images. In addition, the genotypes of several known POAG genetic risk factors were considered for each participant. ONH structural features were discovered using both a principal component analysis approach to identify the major modes of variance within structural measurements and a linear discriminant analysis approach to capture the relationship between genetic risk factors and ONH structure. The identified ONH structural features were evaluated based on the strength of their associations with genotype and development of POAG by the end of the OHTS study. ONH structural features with strong associations with genotype were identified for each of the genetic loci considered. Several identified ONH structural features were significantly associated (p < 0.05) with the development of POAG after Bonferroni correction. Further, incorporation of genetic risk status was found to substantially increase performance of early POAG prediction. These results suggest incorporating both imaging and genetic data into ONH structural modeling significantly improves the ability to explain POAG-related changes to ONH structure.
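    Discovering major modes of structural variance with PCA can be illustrated on synthetic "depth maps" with one dominant mode of variation; everything below is fabricated for the sketch and stands in for the ONH structural measurements:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Toy ONH surfaces: 100 "participants", each a flattened 20x20 depth map
base = np.outer(np.hanning(20), np.hanning(20)).ravel()   # one shared shape pattern
cupping = rng.normal(size=(100, 1))                       # per-subject strength of that mode
maps = cupping * base + 0.05 * rng.normal(size=(100, 400))
# Principal components recover the dominant mode of structural variance
pca = PCA(n_components=5).fit(maps)
print(np.round(pca.explained_variance_ratio_, 2))  # first mode dominates
```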

  16. Eigenanatomy on Fractional Anisotropy Imaging Provides White Matter Anatomical Features Discriminating Between Alzheimer's Disease and Late Onset Bipolar Disorder.

    PubMed

    Besga, Ariadna; Chyzhyk, Darya; González-Ortega, Itxaso; Savio, Alexandre; Ayerdi, Borja; Echeveste, Jon; Graña, Manuel; González-Pinto, Ana

    2016-01-01

Late Onset Bipolar Disorder (LOBD) is the arousal of Bipolar Disorder (BD) at old age (>60) without any previous history of disorders. LOBD is often difficult to distinguish from degenerative dementias, such as Alzheimer's disease (AD), due to comorbidities and common cognitive symptoms. Moreover, LOBD prevalence is increasing due to population aging. Biomarkers extracted from blood plasma are not discriminant because both pathologies share pathophysiological features related to neuroinflammation; therefore, we look for anatomical features highly correlated with blood biomarkers that allow accurate diagnosis prediction. This may shed some light on the basic biological mechanisms leading to one or the other disease. Moreover, accurate diagnosis is needed to select the best personalized treatment. We look for white matter features which are correlated with blood plasma biomarkers (inflammatory and neurotrophic) discriminating LOBD from AD. A sample of healthy controls (HC) (n=19), AD patients (n=35), and BD patients (n=24) has been recruited at the Alava University Hospital. Plasma biomarkers have been obtained at recruitment time. Diffusion-weighted (DWI) magnetic resonance imaging (MRI) data are obtained for each subject. DWI is preprocessed to obtain diffusion tensor imaging (DTI) data, which is reduced to fractional anisotropy (FA) data. In the selection phase, eigenanatomy finds FA eigenvolumes maximally correlated with plasma biomarkers by partial sparse canonical correlation analysis (PSCCAN). In the analysis phase, we take the eigenvolume projection coefficients as the classification features, carrying out cross-validation of support vector machine (SVM) classifiers to obtain the discrimination power of each biomarker's effects. The Johns Hopkins University white matter atlas is used to provide anatomical localizations of the detected feature clusters.
Classification results show that one specific biomarker of oxidative stress (malondialdehyde, MDA) gives the best classification performance (accuracy 85%, F-score 86%, sensitivity and specificity 87%) in the discrimination of AD and LOBD. Discriminating features appear to be localized in the posterior limb of the internal capsule and the superior corona radiata. It is feasible to support contrast diagnosis between LOBD and AD by means of predictive classifiers based on eigenanatomy features computed from FA imaging correlated with plasma biomarkers. In addition, white matter eigenanatomy localizations offer some new avenues to assess the differential pathophysiology of LOBD and AD.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuo, J; Su, K; Department of Radiology, University Hospitals Case Medical Center, Case Western Reserve University, Cleveland, Ohio

Purpose: Accurate and robust photon attenuation derived from MR is essential for PET/MR and MR-based radiation treatment planning applications. Although the fuzzy C-means (FCM) algorithm has been applied for pseudo-CT generation, the input feature combination and the number of clusters have not been optimized. This study aims to optimize both for clinically practical pseudo-CT generation. Methods: Nine volunteers were recruited. A 190-second, single-acquisition UTE-mDixon with 25% (angular) sampling and 3D radial readout was performed to acquire three primitive MR features at TEs of 0.1, 1.5, and 2.8 ms: the free-induction-decay (FID), the first and the second echo images. Three derived images, Dixon-fat and Dixon-water generated by two-point Dixon water/fat separation, and an R2* (1/T2*) map, were also created. To identify informative inputs for generating a pseudo-CT image volume, all 63 combinations, choosing one to six of the feature images, were used as inputs to FCM for pseudo-CT generation. Further, the number of clusters was varied from four to seven to find the optimal approach. Mean prediction deviation (MPD), mean absolute prediction deviation (MAPD), and correlation coefficient (R) of different combinations were compared for feature selection. Results: Among the 63 feature combinations, the four that resulted in the best MAPD and R were further compared along with the set containing all six features. The results suggested that R2* and Dixon-water are the most informative features. Further, including FID also improved the performance of pseudo-CT generation. Consequently, the set containing FID, Dixon-water, and R2* resulted in the most accurate, robust pseudo-CT when the number of clusters equals five (5C). The clusters were interpreted as air, fat, bone, brain, and fluid. The six-cluster result additionally included bone marrow. Conclusion: The results suggested that FID, Dixon-water, and R2* are the most important features.
The findings can be used to facilitate pseudo-CT generation by unsupervised clustering. Please note that the project was completed with partial funding from the Ohio Department of Development grant TECH 11-063 and a sponsored research agreement with Philips Healthcare that is managed by Case Western Reserve University. As noted in the affiliations, some of the authors are Philips employees.
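    The fuzzy C-means step can be sketched in plain NumPy with the standard alternating updates of memberships and centers; the two synthetic "feature channels" and the simple spread initialization are illustrative choices, not the study's optimized inputs:

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100):
    """Minimal fuzzy C-means: alternate membership and center updates."""
    # simple spread initialization: seed centers at low/middle/high points of feature 0
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, c).astype(int)]].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))               # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)         # memberships sum to 1 per voxel
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, centers

# Toy 2-feature "voxels" (e.g. R2* and Dixon-water intensity) in three clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.1, size=(50, 2)) for mu in (0.0, 1.0, 2.0)])
U, centers = fcm(X, c=3)
print(np.round(np.sort(centers[:, 0]), 2))
```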

  18. CT texture features of liver parenchyma for predicting development of metastatic disease and overall survival in patients with colorectal cancer.

    PubMed

    Lee, Scott J; Zea, Ryan; Kim, David H; Lubner, Meghan G; Deming, Dustin A; Pickhardt, Perry J

    2018-04-01

    To determine if identifiable hepatic textural features are present at abdominal CT in patients with colorectal cancer (CRC) prior to the development of CT-detectable hepatic metastases. Four filtration-histogram texture features (standard deviation, skewness, entropy and kurtosis) were extracted from the liver parenchyma on portal venous phase CT images at staging and post-treatment surveillance. Surveillance scans corresponded to the last scan prior to the development of CT-detectable CRC liver metastases in 29 patients (median time interval, 6 months), and these were compared with interval-matched surveillance scans in 60 CRC patients who did not develop liver metastases. Predictive models of liver metastasis-free survival and overall survival were built using regularised Cox proportional hazards regression. Texture features did not significantly differ between cases and controls. For Cox models using all features as predictors, all coefficients were shrunk to zero, suggesting no association between any CT texture features and outcomes. Prognostic indices derived from entropy features at surveillance CT incorrectly classified patients into risk groups for future liver metastases (p < 0.001). On surveillance CT scans immediately prior to the development of CRC liver metastases, we found no evidence suggesting that changes in identifiable hepatic texture features were predictive of their development. • No correlation between liver texture features and metastasis-free survival was observed. • Liver texture features incorrectly classified patients into risk groups for liver metastases. • Standardised texture analysis workflows need to be developed to improve research reproducibility.
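    The four filtration-histogram features named above (standard deviation, skewness, entropy, kurtosis after a filtration step) can be computed with a Laplacian-of-Gaussian filter as the filtration stage; the filter scale and histogram bin count are assumptions of this sketch:

```python
import numpy as np
from scipy import ndimage, stats

def filtration_histogram_features(img, sigma=2.0):
    """Filtration-histogram texture: LoG-filter the patch, then summarise the
    result with SD, skewness, kurtosis and histogram entropy."""
    filtered = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
    vals = filtered.ravel()
    hist, _ = np.histogram(vals, bins=64)
    p = hist[hist > 0] / hist.sum()               # discrete distribution for entropy
    return {"sd": float(vals.std()),
            "skewness": float(stats.skew(vals)),
            "kurtosis": float(stats.kurtosis(vals)),
            "entropy": float(-(p * np.log2(p)).sum())}

liver = np.random.default_rng(0).normal(60, 10, size=(64, 64))  # toy HU patch
feats = filtration_histogram_features(liver)
print({k: round(v, 3) for k, v in feats.items()})
```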

  19. GyneScan

    PubMed Central

    Acharya, U. Rajendra; Sree, S. Vinitha; Kulshreshtha, Sanjeev; Molinari, Filippo; Koh, Joel En Wei; Saba, Luca; Suri, Jasjit S.

    2014-01-01

Ovarian cancer is the fifth highest cause of cancer in women and the leading cause of death from gynecological cancers. Accurate diagnosis of ovarian cancer from acquired images is dependent on the expertise and experience of ultrasonographers or physicians, and is therefore associated with inter-observer variability. Computer Aided Diagnostic (CAD) techniques use a number of different data mining techniques to automatically predict the presence or absence of cancer, and therefore, are more reliable and accurate. A review of published literature in the field of CAD based ovarian cancer detection indicates that many studies use ultrasound images as the base for analysis. The key objective of this work is to propose an effective adjunct CAD technique called GyneScan for ovarian tumor detection in ultrasound images. In our proposed data mining framework, we extract several texture features based on first order statistics, Gray Level Co-occurrence Matrix and run length matrix. The significant features selected using t-test are then used to train and test several supervised learning based classifiers such as Probabilistic Neural Networks (PNN), Support Vector Machine (SVM), Decision Tree (DT), k-Nearest Neighbor (KNN), and Naïve Bayes (NB). We evaluated the developed framework using 1300 benign and 1300 malignant images. Using 11 significant features in KNN/PNN classifiers, we were able to achieve 100% classification accuracy, sensitivity, specificity, and positive predictive value in detecting ovarian tumor. Even though more validation using larger databases would better establish the robustness of our technique, the preliminary results are promising. This technique could be used as a reliable adjunct method to existing imaging modalities to provide a more confident second opinion on the presence/absence of ovarian tumor. PMID:24325128
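    The t-test-driven feature selection feeding a KNN classifier can be sketched as below; the feature counts, effect size, and class sizes are synthetic, not the GyneScan data:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy texture-feature matrices for "benign" and "malignant" images
benign = rng.normal(0.0, 1.0, size=(100, 30))
malignant = rng.normal(0.0, 1.0, size=(100, 30))
malignant[:, :5] += 1.5            # 5 genuinely discriminative features

# Rank features by t-test p-value and keep the most significant ones
_, pvals = ttest_ind(benign, malignant, axis=0)
keep = np.argsort(pvals)[:5]
X = np.vstack([benign, malignant])[:, keep]
y = np.array([0] * 100 + [1] * 100)
acc = KNeighborsClassifier(n_neighbors=5).fit(X, y).score(X, y)
print("selected:", np.sort(keep), "training accuracy:", round(acc, 2))
```

    On held-out data the selection should also be refit per training split, as noted for the nested protocols elsewhere in these records.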

  20. A new method using multiphoton imaging and morphometric analysis for differentiating chromophobe renal cell carcinoma and oncocytoma kidney tumors

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Mukherjee, Sushmita; Jain, Manu

    2016-03-01

Distinguishing chromophobe renal cell carcinoma (chRCC) from oncocytoma on hematoxylin and eosin images may be difficult and require time-consuming ancillary procedures. Multiphoton microscopy (MPM), an optical imaging modality, was used to rapidly generate sub-cellular histological-resolution images from formalin-fixed unstained tissue sections of chRCC and oncocytoma. Tissues were excited at a 780 nm wavelength, and emission signals (including second harmonic generation and autofluorescence) were collected in different channels between 390 nm and 650 nm. Granular structure in the cell cytoplasm was observed in both chRCC and oncocytoma. Quantitative morphometric analysis was conducted to distinguish the two. To perform the analysis, cytoplasm and granules in tumor cells were segmented from the images, and their area and fluorescence intensity were measured in the different channels. Multiple features were measured to quantify the morphological and fluorescence properties. A linear support vector machine (SVM) was used for classification. Re-substitution validation, cross validation, and the receiver operating characteristic (ROC) curve were used to evaluate the efficacy of the SVM classifier. A wrapper feature-selection algorithm was used to select the optimal features providing the best predictive performance in separating the two tissue types (classes). Statistical measures such as sensitivity, specificity, accuracy, and area under the curve (AUC) of the ROC were calculated to evaluate the efficacy of the classification. Predictive accuracy of over 80% was achieved. This method, if validated on a larger and more diverse sample set, may serve as an automated rapid diagnostic tool to differentiate between chRCC and oncocytoma. An advantage of such automated methods is that they are free from investigator bias and variability.

  1. Phenotypic feature quantification of patient derived 3D cancer spheroids in fluorescence microscopy image

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Rhee, Seon-Min; Seo, Ji-Hyun; Kim, Myoung-Hee

    2017-03-01

Patients' responses to a drug differ at the cellular level. Here, we present an image-based cell phenotypic feature quantification method for predicting the responses of patient-derived glioblastoma cells to a particular drug. We used high-content imaging to understand the features of patient-derived cancer cells. A 3D spheroid culture resembles the in vivo environment more closely than 2D adherent cultures do, and it allows for the observation of cellular aggregate characteristics. However, cell analysis at the individual level is more challenging. In this paper, we demonstrate image-based phenotypic screening of the nuclei of patient-derived cancer cells. We first stitched the images of each well of the 384-well plate acquired in the same state. We then used intensity information to detect the colonies. Nuclear intensity and morphological characteristics were used for the segmentation of individual nuclei. Next, we calculated the position of each nucleus, which reveals the spatial pattern of cells in the well environment. Finally, we compared the results obtained using 3D spheroid culture cells with those obtained using 2D adherent culture cells from the same patient treated with the same drugs. This technique could be applied for image-based phenotypic screening of cells to determine a patient's response to a drug.

  2. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

Most previous studies on visual saliency have focused on two-dimensional (2D) scenes. With the rapid growth of three-dimensional (3D) video applications, it is highly desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movements of each subject. In addition, this database contains 475 computed depth maps. Given the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance saliency estimation accuracy, particularly for close-up objects hidden in complex-textured backgrounds. In addition, we examined the effectiveness of various low-, mid-, and high-level features for saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our method shows better performance on the 3D test images. The eye-tracking database and the MATLAB source code for the proposed saliency model and evaluation methods are available on our website.

  3. A robust method for estimating motorbike count based on visual information learning

    NASA Astrophysics Data System (ADS)

    Huynh, Kien C.; Thai, Dung N.; Le, Sach T.; Thoai, Nam; Hamamoto, Kazuhiko

    2015-03-01

Estimating the number of vehicles in traffic videos is an important and challenging task in traffic surveillance, especially with a high level of occlusion between vehicles, e.g., in crowded urban areas with people and/or motorbikes. Under such conditions, separating individual vehicles from foreground silhouettes often requires complicated computation [1][2][3]. Thus, the counting problem has gradually shifted toward drawing statistical inferences of target-object density from shape [4], local features [5], etc. These studies indicate a correlation between local features and the number of target objects, but they are inadequate for constructing an accurate model of vehicle density estimation. In this paper, we present a reliable method that is robust to illumination changes and partial affine transformations, and that can achieve high accuracy in the presence of occlusion. First, local features are extracted from images of the scene using the Speeded-Up Robust Features (SURF) method. For each image, a global feature vector is computed using a Bag-of-Words model constructed from these local features. Finally, a mapping between the extracted global feature vectors and their labels (the number of motorbikes) is learned. That mapping provides a strong prediction model for estimating the number of motorbikes in new images. The experimental results show that our proposed method achieves better accuracy than comparable methods.
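The local-features-to-count mapping can be sketched as below: descriptors are quantized against a learned visual vocabulary (Bag-of-Words), and a regressor maps each image's word histogram to a count. Randomly generated descriptor sets stand in for SURF output, with keypoint density tied to the object count; this illustrates the idea, not the authors' system.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)
# Stand-in local descriptors: each "image" yields a variable-size set of 64-D
# vectors, as a SURF detector would; more motorbikes -> more keypoints.
counts = rng.integers(1, 15, 30)
descriptors = [rng.normal(size=(int(c) * 8, 64)) for c in counts]

vocab = KMeans(n_clusters=16, n_init=5, random_state=0)
vocab.fit(np.vstack(descriptors))               # learn the visual vocabulary

def bow_vector(descs):
    """Unnormalized Bag-of-Words histogram of a descriptor set."""
    return np.bincount(vocab.predict(descs), minlength=16).astype(float)

X = np.array([bow_vector(d) for d in descriptors])
reg = SVR(kernel="linear").fit(X, counts)       # histogram -> count mapping
print(np.abs(reg.predict(X) - counts).mean())   # mean absolute training error
```

With unnormalized histograms the total word count carries the keypoint density, which is what lets the regressor recover a count; normalizing the histograms would discard that signal.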

  4. Nuclear morphometry in flat epithelial atypia of the breast as a predictor of malignancy: a digital image-based histopathologic analysis.

    PubMed

    Williams, Phillip A; Djordjevic, Bojana; Ayroud, Yasmine; Islam, Shahidul; Gravel, Denis; Robertson, Susan J; Parra-Herran, Carlos

    2014-12-01

    To identify morphometric features unique to flat epithelial atypia associated with cancer using digital image analysis. Cases with diagnosis of flat epithelial atypia were retrieved and divided into 2 groups: flat epithelial atypia associated with invasive or in situ carcinoma (n = 31) and those without malignancy (n = 27). Slides were digitally scanned. Nuclear features were analyzed on representative images at 20x magnification using digital morphometric software. Parameters related to nuclear shape and size (diameter, perimeter) were similar in both groups. However, cases with malignancy had significantly higher densitometric green (p = 0.02), red (p = 0.03), and grey (p = 0.02) scale levels as compared to cases without cancer. A mean grey densitometric level > 119.45 had 71% sensitivity and 70.4% specificity in detecting cases with concomitant carcinoma. Morphometry of features related to nuclear staining appears to be useful in predicting risk of concurrent malignancy in patients with flat epithelial atypia, when added to a comprehensive histopathologic evaluation.
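As an illustration of how the reported cutoff translates into operating characteristics, the sketch below computes sensitivity and specificity for a single-feature threshold. The grey-level distributions are synthetic stand-ins, not the study's measurements; only the 119.45 cutoff comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
grey_cancer = rng.normal(125, 8, 31)   # FEA cases with concomitant carcinoma
grey_benign = rng.normal(114, 8, 27)   # FEA cases without malignancy
cutoff = 119.45                        # mean grey densitometric level threshold

sensitivity = (grey_cancer > cutoff).mean()    # true-positive rate
specificity = (grey_benign <= cutoff).mean()   # true-negative rate
print(round(sensitivity, 2), round(specificity, 2))
```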

  5. SU-F-R-36: Validating Quantitative Radiomic Texture Features for Oncologic PET: A Digital Phantom Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, F; Yang, Y; Young, L

Purpose: Radiomic texture features derived from oncologic PET have recently come under intense investigation in the context of patient stratification and treatment outcome prediction in a variety of cancer types; however, their validity has not yet been examined. This work aims to validate radiomic PET texture metrics through realistic simulations in a ground-truth setting. Methods: Simulation of FDG-PET was conducted by applying the Zubal phantom as an attenuation map to the SimSET software package, which employs Monte Carlo techniques to model the physical process of emission imaging. A total of 15 irregularly shaped lesions featuring heterogeneous activity distributions were simulated. For each simulated lesion, 28 texture features related to the intensity histogram (GLIH), grey-level co-occurrence matrix (GLCOM), neighborhood difference matrix (GLNDM), and zone size matrix (GLZSM) were evaluated and compared with their respective values extracted from the ground-truth activity map. Results: Relative to the values from the ground-truth images, texture parameters computed on the simulated data deviated by 0.73–3026.2% for GLIH-based, 0.02–100.1% for GLCOM-based, 1.11–173.8% for GLNDM-based, and 0.35–66.3% for GLZSM-based features. For the majority of the examined texture metrics (16/28), values on the simulated data differed significantly from those on the ground-truth images (P-values ranging from <0.0001 to 0.04). Features not exhibiting a significant difference comprised GLIH-based standard deviation; GLCOM-based energy and entropy; GLNDM-based coarseness and contrast; and GLZSM-based low gray-level zone emphasis, high gray-level zone emphasis, short zone low gray-level emphasis, long zone low gray-level emphasis, long zone high gray-level emphasis, and zone size nonuniformity. Conclusion: The extent to which PET imaging disturbs texture appearance is feature-dependent and can be substantial. It is thus advised that use of PET texture parameters for predictive and prognostic measurements in the oncologic setting awaits further systematic and critical evaluation.
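The validation logic (comparing each texture metric on simulated versus ground-truth images of the same lesions) can be sketched as follows, using synthetic values for the 15 lesions and a paired nonparametric test as a stand-in for whatever significance test the authors applied.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
truth = rng.uniform(1, 10, 15)                  # metric on ground-truth maps
simulated = truth * rng.normal(1.2, 0.05, 15)   # same metric after simulation

pct_diff = 100 * np.abs(simulated - truth) / truth   # per-lesion deviation
stat, p = wilcoxon(truth, simulated)                 # paired significance test
print(f"median deviation {np.median(pct_diff):.1f}%, p = {p:.5f}")
```

A systematic bias like the one simulated here shows up both as a large percent deviation and as a significant paired test, matching the pattern the abstract reports for 16 of the 28 metrics.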

  6. Beyond endoscopic assessment in inflammatory bowel disease: real-time histology of disease activity by non-linear multimodal imaging

    NASA Astrophysics Data System (ADS)

    Chernavskaia, Olga; Heuke, Sandro; Vieth, Michael; Friedrich, Oliver; Schürmann, Sebastian; Atreya, Raja; Stallmach, Andreas; Neurath, Markus F.; Waldner, Maximilian; Petersen, Iver; Schmitt, Michael; Bocklitz, Thomas; Popp, Jürgen

    2016-07-01

Assessing disease activity is a prerequisite for adequate treatment of inflammatory bowel diseases (IBD) such as Crohn’s disease and ulcerative colitis. In addition to endoscopic mucosal healing, histologic remission poses a promising end-point of IBD therapy. However, evaluating histological remission carries a risk of complications from the acquisition of biopsies and delays diagnosis because of tissue processing procedures. In this regard, non-linear multimodal imaging techniques might allow real-time evaluation of microscopic IBD activity in the endoscopy unit. In this study, tissue sections were investigated using the non-linear multimodal microscopy combination of coherent anti-Stokes Raman scattering (CARS), two-photon excited autofluorescence (TPEF), and second-harmonic generation (SHG). After the measurements, a gold-standard assessment of histological indexes was carried out based on a conventional H&E stain. Subsequently, various geometry- and intensity-related features were extracted from the multimodal images, and an optimized feature set was utilized to predict histological index levels with a linear classifier. The automated prediction shortens the time to diagnosis. Therefore, non-linear multimodal imaging may provide a real-time diagnosis of IBD activity suited to assist clinical decision making within the endoscopy unit.

  7. Computer vision and machine learning for robust phenotyping in genome-wide studies

    PubMed Central

    Zhang, Jiaoping; Naik, Hsiang Sing; Assefa, Teshale; Sarkar, Soumik; Reddy, R. V. Chowda; Singh, Arti; Ganapathysubramanian, Baskar; Singh, Asheesh K.

    2017-01-01

Traditional evaluation of crop biotic and abiotic stresses is time-consuming and labor-intensive, limiting the ability to dissect the genetic basis of quantitative traits. A machine learning (ML)-enabled image-phenotyping pipeline for genetic studies of the abiotic stress iron deficiency chlorosis (IDC) of soybean is reported. IDC classification and severity for an association panel of 461 diverse plant-introduction accessions were evaluated using an end-to-end phenotyping workflow. The workflow consisted of a multi-stage procedure including: (1) optimized protocols for consistent image capture across plant canopies, (2) canopy identification and registration from cluttered backgrounds, (3) extraction of domain-expert-informed features from the processed images to accurately represent IDC expression, and (4) supervised ML-based classifiers that linked the automatically extracted features with expert-rating-equivalent IDC scores. ML-generated phenotypic data were subsequently utilized for genome-wide association study and genomic prediction. The results illustrate the reliability and advantage of the ML-enabled image-phenotyping pipeline, which identified a previously reported locus and a novel locus harboring a gene homolog involved in iron acquisition. This study demonstrates a promising path for integrating the phenotyping pipeline into genomic prediction and provides a systematic framework enabling robust and quicker phenotyping through ground-based systems. PMID:28272456

  8. Radiomics Evaluation of Histological Heterogeneity Using Multiscale Textures Derived From 3D Wavelet Transformation of Multispectral Images.

    PubMed

    Chaddad, Ahmad; Daniel, Paul; Niazi, Tamim

    2018-01-01

Colorectal cancer (CRC) is markedly heterogeneous and develops progressively toward malignancy through several stages, which include stroma (ST), benign hyperplasia (BH), intraepithelial neoplasia (IN) or precursor cancerous lesion, and carcinoma (CA). Identification of the malignancy stage of CRC pathology tissues (PT) allows the most appropriate therapeutic intervention. This study investigates multiscale texture features extracted from CRC pathology sections using a 3D wavelet transform (3D-WT) filter. Multiscale features were extracted from digital whole-slide images of 39 patients that were segmented in a pre-processing step using an active contour model. The capacity of multiscale texture to compare and classify PTs was investigated using the ANOVA significance test and random forest classifier models, respectively. Twelve significant features derived from the multiscale texture (i.e., variance, entropy, and energy) were found to discriminate between CRC grades at a significance value of p < 0.01 after correction. Combining multiscale texture features led to better predictive capacity than prediction models based on individual-scale features, with an average (±SD) classification accuracy of 93.33 (±3.52)%, sensitivity of 88.33 (±4.12)%, and specificity of 96.89 (±3.88)%. Entropy was the best classifying feature across all PT grades, with average area under the curve (AUC) values of 91.17%, 94.21%, 97.70%, and 100% for ST, BH, IN, and CA, respectively. Our results suggest that multiscale texture features based on the 3D-WT are sensitive enough to discriminate between CRC grades, with entropy the best predictor of pathology grade.
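The multiscale feature extraction can be sketched with a single-level 3D Haar transform (a hand-rolled stand-in for the paper's 3D-WT filter), computing variance, entropy, and energy per subband on a synthetic volume:

```python
import numpy as np

def haar_step(x, axis):
    """One Haar analysis step along an axis: (approximation, detail)."""
    s = np.swapaxes(x, 0, axis)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)
    d = (s[0::2] - s[1::2]) / np.sqrt(2)
    return np.swapaxes(a, 0, axis), np.swapaxes(d, 0, axis)

def dwt3(vol):
    """Single-level 3D Haar transform -> dict of 8 subbands ('aaa'...'ddd')."""
    bands = {"": vol}
    for axis in range(3):
        bands = {name + tag: sub
                 for name, v in bands.items()
                 for tag, sub in zip("ad", haar_step(v, axis))}
    return bands

def subband_stats(band):
    """Variance, (Shannon) entropy, and energy of one subband."""
    p = np.abs(band).ravel()
    p = p / p.sum()
    entropy = -np.sum(p * np.log2(p + 1e-12))
    return band.var(), entropy, float(np.sum(band ** 2))

rng = np.random.default_rng(4)
volume = rng.random((16, 16, 16))      # stand-in for a segmented image stack
features = {name: subband_stats(sub) for name, sub in dwt3(volume).items()}
print(sorted(features))                # the eight subband labels
```

A multi-level decomposition (re-applying `dwt3` to the `aaa` band) would yield the additional scales the study combines; these three statistics per subband then feed the classifier.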

  9. Breast MRI radiogenomics: Current status and research implications.

    PubMed

    Grimm, Lars J

    2016-06-01

Breast magnetic resonance imaging (MRI) radiogenomics is an emerging area of research that has the potential to directly influence clinical practice. Clinical MRI scanners today are capable of providing excellent temporal and spatial resolution, which allows extraction of numerous imaging features via manual approaches or complex computer vision algorithms. Meanwhile, advances in breast cancer genetics research have resulted in the identification of promising genes associated with cancer outcomes. In addition, validated genomic signatures have been developed that allow categorization of breast cancers into distinct molecular subtypes as well as prediction of the risk of cancer recurrence and response to therapy. Current radiogenomics research has been directed towards exploratory analysis of individual genes, understanding tumor biology, and developing imaging surrogates to genetic analysis, with the long-term goal of developing a meaningful tool for clinical care. The background of breast MRI radiogenomics research, image feature extraction techniques, approaches to radiogenomics research, and promising areas of investigation are reviewed. J. Magn. Reson. Imaging 2016;43:1269-1278. © 2015 Wiley Periodicals, Inc.

  10. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    PubMed

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

Machine learning systems are achieving better performance at the cost of becoming increasingly complex. In consequence, however, they become less interpretable, which may cause distrust from the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding whether the system learned the relevant relations in the data correctly, while the latter focuses on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology on brain tumor segmentation and penumbra estimation in ischemic stroke lesions, showing its ability to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Fundus Image Features Extraction for Exudate Mining in Coordination with Content Based Image Retrieval: A Study

    NASA Astrophysics Data System (ADS)

    Gururaj, C.; Jayadevappa, D.; Tunga, Satish

    2018-02-01

The medical field has seen phenomenal improvement over recent years. The advent of computers, together with increases in processing and internet speeds, has changed the face of medical technology. However, there is still scope for improving the technologies in use today. One of the many such technologies of medical aid is the detection of afflictions of the eye. Although a repertoire of research has been accomplished in this field, most studies fail to address how to take detection forward to a stage where it will benefit society at large. An automated system that can predict a patient's current medical condition from a fundus image of the eye is yet to see the light of day. Such a system is explored in this paper by summarizing a number of techniques for fundus image feature extraction, predominantly hard exudate mining, coupled with Content Based Image Retrieval to develop an automation tool. This knowledge would bring about worthy changes in the domain of exudate extraction from the eye, which is essential in cases where patients may not have access to the best technologies. This paper attempts a comprehensive summary of techniques for Content Based Image Retrieval (CBIR) and fundus image feature extraction, reviews a few choice methods of both, and explores ways of combining the two so that the result benefits all.

  13. Incorporating High-Frequency Physiologic Data Using Computational Dictionary Learning Improves Prediction of Delayed Cerebral Ischemia Compared to Existing Methods.

    PubMed

    Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin

    2018-01-01

Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolutional dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. A total of 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80% of the data, with 20% set aside for validation testing. The Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, and the Hunt-Hess and Glasgow Coma Scales. An unsupervised approach using convolutional dictionary learning was used to extract features from physiological time series (systolic and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset and then applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset: standard grading scale (mFS), AUC 0.54; combined demographics and grading scales (baseline features), AUC 0.63; kernel-derived physiologic features, AUC 0.66; combined baseline and physiologic features with redundant feature reduction, AUC 0.71 on the derivation dataset and 0.78 on the validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that incorporating individual physiologic data achieves higher classification accuracy.
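The feature-extraction step can be sketched as below, with scikit-learn's (non-convolutional) dictionary learning standing in for the convolutional variant the paper uses, and a synthetic heart-rate trace standing in for bedside-monitor data:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(5)
# Synthetic physiologic time series: a slowly oscillating heart rate plus noise.
hr = 70 + 5 * np.sin(np.arange(2000) / 50) + rng.normal(0, 1, 2000)

win = 40
# Overlapping windows of the signal become the training "patches".
windows = np.array([hr[i:i + win] for i in range(0, len(hr) - win, win // 2)])
windows = windows - windows.mean(axis=1, keepdims=True)  # remove baseline

dico = MiniBatchDictionaryLearning(n_components=8, alpha=1.0,
                                   random_state=0).fit(windows)
codes = dico.transform(windows)               # sparse codes per window
patient_features = np.abs(codes).mean(axis=0) # pooled patient-level features
print(patient_features.shape)
```

A vector like `patient_features` (one per patient, per signal) is the kind of input the downstream PLS/SVM classifiers would receive alongside the baseline clinical features.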

  14. PREDICTION OF SOLAR FLARES USING UNIQUE SIGNATURES OF MAGNETIC FIELD IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raboonik, Abbas; Safari, Hossein; Alipour, Nasibe

Prediction of solar flares is an important task in solar physics. The occurrence of solar flares depends strongly on the structure and topology of solar magnetic fields. A new method for predicting large (M- and X-class) flares is presented, which applies machine learning to the Zernike moments (ZMs) of magnetograms observed by the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory over a period of six years, from 2010 June 2 to 2016 August 1. Magnetic field images consisting of the radial component of the magnetic field are converted to finite sets of ZMs and fed to a support vector machine classifier. ZMs can elicit unique features from any 2D image, which may allow more accurate classification. The results indicate whether an arbitrary active region has the potential to produce at least one large flare. We show that the majority of large flares can be predicted within 48 hr before their occurrence, with only 10 false negatives out of 385 flaring active-region magnetograms and 21 false positives out of 179 non-flaring active-region magnetograms. Our method may provide a useful tool for the prediction of solar flares that can be employed alongside other forecasting methods.
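The ZM-plus-SVM idea can be sketched as follows. The Zernike-moment computation is a small hand-rolled, low-order implementation (a stand-in for a full moments library), and the "magnetograms" are synthetic: flaring stand-ins carry a strong bipolar structure, quiet ones are noise.

```python
import numpy as np
from math import factorial
from sklearn.svm import SVC

def zernike_magnitudes(img, degree=6):
    """|Z_nm| for n <= degree over the image's inscribed unit disk."""
    N = img.shape[0]
    y, x = np.mgrid[-1:1:complex(0, N), -1:1:complex(0, N)]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1
    feats = []
    for n in range(degree + 1):
        for m in range(n % 2, n + 1, 2):             # n - m must be even
            R = sum((-1) ** k * factorial(n - k)
                    / (factorial(k) * factorial((n + m) // 2 - k)
                       * factorial((n - m) // 2 - k)) * rho ** (n - 2 * k)
                    for k in range((n - m) // 2 + 1))
            V = R * np.exp(1j * m * theta)           # Zernike basis function
            Z = (n + 1) / np.pi * np.sum(img[mask] * np.conj(V[mask]))
            feats.append(abs(Z))                     # rotation-invariant
    return np.array(feats)

rng = np.random.default_rng(6)
quiet = [rng.normal(0, 1, (32, 32)) for _ in range(10)]   # non-flaring stand-ins
def bipolar():
    im = rng.normal(0, 1, (32, 32))
    im[8:16, 8:24] += 5.0        # strong bipolar structure marks "flaring"
    im[16:24, 8:24] -= 5.0
    return im
flaring = [bipolar() for _ in range(10)]

X = np.array([zernike_magnitudes(im) for im in quiet + flaring])
y = np.array([0] * 10 + [1] * 10)
svm = SVC(kernel="linear").fit(X, y)
print(svm.score(X, y))           # accuracy on the toy training set
```

The odd-m moments are what pick up the antisymmetric bipolar pattern here; a real pipeline would use many more moments and held-out evaluation.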

  15. Intelligent MRTD testing for thermal imaging system using ANN

    NASA Astrophysics Data System (ADS)

    Sun, Junyue; Ma, Dongmei

    2006-01-01

The Minimum Resolvable Temperature Difference (MRTD) is the most widely accepted figure of merit for describing the performance of a thermal imaging system, and many models have been proposed to predict it. MRTD testing is a psychophysical task, for which observer biases are unavoidable. It requires laboratory conditions such as controlled air conditions and constant temperature, needs expensive measuring equipment, and takes considerable time, especially when measuring imagers of the same type. An automated and intelligent measurement method is therefore desirable. This paper adopts the concept of automated MRTD testing using a boundary contour system and fuzzy ARTMAP, but uses different methods: it describes an automated MRTD testing procedure based on a back-propagation network. First, we use a frame grabber to capture the 4-bar target image data. Then, according to the image gray scale, we segment the image to locate the 4-bar target and extract a feature vector representing the image characteristics and human detection ability. These feature sets, along with the known target visibility, are used to train an artificial neural network (ANN). In effect, this is a nonlinear classification of the image series using the ANN, whose task is to judge whether an image is resolvable or uncertain. The trained ANN then emulates observer performance in determining the MRTD. This method can reduce inter-observer uncertainties and long-term time-dependent factors through standardization. This paper introduces the feature extraction algorithm, demonstrates the feasibility of the whole process, and gives the accuracy of the MRTD measurement.

  16. Confocal arthroscopy-based patient-specific constitutive models of cartilaginous tissues - II: prediction of reaction force history of meniscal cartilage specimens.

    PubMed

    Taylor, Zeike A; Kirk, Thomas B; Miller, Karol

    2007-10-01

The theoretical framework developed in a companion paper (Part I) is used to derive estimates of the mechanical response of two meniscal cartilage specimens. The previously developed framework consists of a constitutive model capable of incorporating confocal-image-derived tissue microstructural data. In the present paper (Part II), fibre and matrix constitutive parameters are first estimated from mechanical testing of a batch of specimens similar to, but independent from, those under consideration. Image analysis techniques that allow estimation of tissue microstructural parameters from confocal images are presented. The constitutive model and image-derived structural parameters are then used to predict the reaction force history of the two meniscal specimens subjected to partially confined compression. The predictions are made on the basis of the specimens' individual structural condition as assessed by confocal microscopy and involve no tuning of material parameters. Although the model does not reproduce all features of the experimental curves, as an unfitted estimate of mechanical response the prediction is quite accurate. In light of the obtained results, it is judged that more general non-invasive estimation of tissue mechanical properties is possible using the developed framework.

  17. Use of Fetal Magnetic Resonance Image Analysis and Machine Learning to Predict the Need for Postnatal Cerebrospinal Fluid Diversion in Fetal Ventriculomegaly.

    PubMed

    Pisapia, Jared M; Akbari, Hamed; Rozycki, Martin; Goldstein, Hannah; Bakas, Spyridon; Rathore, Saima; Moldenhauer, Julie S; Storm, Phillip B; Zarnow, Deborah M; Anderson, Richard C E; Heuer, Gregory G; Davatzikos, Christos

    2018-02-01

    Which children with fetal ventriculomegaly, or enlargement of the cerebral ventricles in utero, will develop hydrocephalus requiring treatment after birth is unclear. To determine whether extraction of multiple imaging features from fetal magnetic resonance imaging (MRI) and integration using machine learning techniques can predict which patients require postnatal cerebrospinal fluid (CSF) diversion after birth. This retrospective case-control study used an institutional database of 253 patients with fetal ventriculomegaly from January 1, 2008, through December 31, 2014, to generate a predictive model. Data were analyzed from January 1, 2008, through December 31, 2015. All 25 patients who required postnatal CSF diversion were selected and matched by gestational age with 25 patients with fetal ventriculomegaly who did not require CSF diversion (discovery cohort). The model was applied to a sample of 24 consecutive patients with fetal ventriculomegaly who underwent evaluation at a separate institution (replication cohort) from January 1, 1998, through December 31, 2007. Data were analyzed from January 1, 1998, through December 31, 2009. To generate the model, linear measurements, area, volume, and morphologic features were extracted from the fetal MRI, and a machine learning algorithm analyzed multiple features simultaneously to find the combination that was most predictive of the need for postnatal CSF diversion. Accuracy, sensitivity, and specificity of the model in correctly classifying patients requiring postnatal CSF diversion. A total of 74 patients (41 girls [55%] and 33 boys [45%]; mean [SD] gestational age, 27.0 [5.6] months) were included from both cohorts. In the discovery cohort, median time to CSF diversion was 6 days (interquartile range [IQR], 2-51 days), and patients with fetal ventriculomegaly who did not develop symptoms were followed up for a median of 29 months (IQR, 9-46 months). 
The model correctly classified patients who required CSF diversion with 82% accuracy, 80% sensitivity, and 84% specificity. In the replication cohort, the model achieved 91% accuracy, 75% sensitivity, and 95% specificity. Image analysis and machine learning can be applied to fetal MRI findings to predict the need for postnatal CSF diversion. The model provides prognostic information that may guide clinical management and select candidates for potential fetal surgical intervention.
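
    The reported accuracy, sensitivity, and specificity all derive from a 2×2 confusion matrix over the CSF-diversion outcome. A minimal sketch of that computation (the labels below are invented for illustration, not the study's data):

```python
def confusion_counts(y_true, y_pred):
    """Tally the 2x2 confusion matrix for a binary outcome
    (1 = required CSF diversion, 0 = did not)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from predicted labels."""
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of true positives caught
    specificity = tn / (tn + fp)   # fraction of true negatives kept
    return accuracy, sensitivity, specificity
```

    Matching the discovery-cohort figures (82% / 80% / 84%) would require the study's actual predictions; the functions above only show how such figures are obtained.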

  18. Image quality prediction: an aid to the Viking Lander imaging investigation on Mars.

    PubMed

    Huck, F O; Wall, S D

    1976-07-01

    Two Viking spacecraft scheduled to land on Mars in the summer of 1976 will return multispectral panoramas of the Martian surface with resolutions 4 orders of magnitude higher than have been previously obtained and stereo views with resolutions approaching that of the human eye. Mission constraints and uncertainties require a carefully planned imaging investigation that is supported by a computer model of camera response and surface features to aid in diagnosing camera performance, in establishing a preflight imaging strategy, and in rapidly revising this strategy if pictures returned from Mars reveal unfavorable or unanticipated conditions.

  19. Modeling Habitat Suitability of Migratory Birds from Remote Sensing Images Using Convolutional Neural Networks

    PubMed Central

    Su, Jin-He; Piao, Ying-Chao; Luo, Ze; Yan, Bao-Ping

    2018-01-01

    Simple Summary Understanding the spatio-temporal distribution of species habitats would facilitate wildlife resource management and conservation efforts. Existing methods perform poorly due to the limited availability of training samples. More recently, location-aware sensors have been widely used to track animal movements. The aim of the study was to generate suitability maps for bar-headed geese using movement data coupled with environmental parameters, such as remote sensing images and temperature data. Therefore, we modified a deep convolutional neural network for multi-scale inputs. The results indicate that the proposed method can identify areas with dense concentrations of geese around Qinghai Lake. In addition, this approach may also be applicable to other species with different niche factors or in areas where biological survey data are scarce. Abstract With the application of various data acquisition devices, large volumes of animal movement data can be used to label presence data in remote sensing images and predict species distribution. In this paper, a two-stage classification approach combining movement data and moderate-resolution remote sensing images is proposed. First, we introduce a new density-based clustering method to identify stopovers from migratory birds’ movement data and generate classification samples based on the clustering result. We split the remote sensing images into 16 × 16 patches and label them as positive samples if they overlap with stopovers. Second, a multi-convolution neural network model is proposed for extracting features from temperature data and remote sensing images, respectively. Then a Support Vector Machine (SVM) model is used to combine the features and produce the final classification. The experimental analysis was carried out on public Landsat 5 TM images and a GPS dataset collected from 29 birds over three years.
The results indicated that our proposed method outperforms the existing baseline methods and was able to achieve good performance in habitat suitability prediction. PMID:29701686
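
    The patch-labeling step described above (16 × 16 tiles marked positive when they overlap a stopover) can be sketched as follows; the bounding-box format and grid size are assumptions for illustration:

```python
import numpy as np

def label_patches(height, width, stopovers, patch=16):
    """Mark each patch-sized tile of an image positive if it overlaps any
    stopover bounding box; boxes are (row0, col0, row1, col1), half-open."""
    labels = np.zeros((height // patch, width // patch), dtype=int)
    for r0, c0, r1, c1 in stopovers:
        pr0, pc0 = r0 // patch, c0 // patch              # first tile touched
        pr1, pc1 = (r1 - 1) // patch, (c1 - 1) // patch  # last tile touched
        labels[pr0:pr1 + 1, pc0:pc1 + 1] = 1
    return labels
```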

  20. Computed Tomography Imaging Features in Acute Uncomplicated Stanford Type-B Aortic Dissection Predict Late Adverse Events.

    PubMed

    Sailer, Anna M; van Kuijk, Sander M J; Nelemans, Patricia J; Chin, Anne S; Kino, Aya; Huininga, Mark; Schmidt, Johanna; Mistelbauer, Gabriel; Bäumler, Kathrin; Chiu, Peter; Fischbein, Michael P; Dake, Michael D; Miller, D Craig; Schurink, Geert Willem H; Fleischmann, Dominik

    2017-04-01

    Medical treatment of initially uncomplicated acute Stanford type-B aortic dissection is associated with a high rate of late adverse events. Identification of individuals who potentially benefit from preventive endografting is highly desirable. The association of computed tomography imaging features with late adverse events was retrospectively assessed in 83 patients with acute uncomplicated Stanford type-B aortic dissection, followed over a median of 850 (interquartile range 247-1824) days. Adverse events were defined as fatal or nonfatal aortic rupture, rapid aortic growth (>10 mm/y), aneurysm formation (≥6 cm), organ or limb ischemia, or new uncontrollable hypertension or pain. Five significant predictors were identified using multivariable Cox regression analysis: connective tissue disease (hazard ratio [HR] 2.94, 95% confidence interval [CI]: 1.29-6.72; P = 0.01), circumferential extent of false lumen in angular degrees (HR 1.03 per degree, 95% CI: 1.01-1.04, P = 0.003), maximum aortic diameter (HR 1.10 per mm, 95% CI: 1.02-1.18, P = 0.015), false lumen outflow (HR 0.999 per mL/min, 95% CI: 0.998-1.000; P = 0.055), and number of intercostal arteries (HR 0.89 per artery, 95% CI: 0.80-0.98; P = 0.024). A prediction model was constructed to calculate patient-specific risk at 1, 2, and 5 years and to stratify patients into high-, intermediate-, and low-risk groups. The model was internally validated by bootstrapping and showed good discriminatory ability with an optimism-corrected C statistic of 70.1%. Computed tomography imaging-based morphological features combined into a prediction model may be able to identify patients at high risk for late adverse events after an initially uncomplicated type-B aortic dissection. © 2017 American Heart Association, Inc.
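
    Because the abstract reports hazard ratios but not the baseline hazard, only relative risk between two patients can be reconstructed from it. A sketch under that limitation (the predictor values below are hypothetical):

```python
import math

# Hazard ratios as reported in the abstract (per unit of each predictor);
# the baseline hazard is not reported, so only relative hazard is computable.
HAZARD_RATIOS = {
    "connective_tissue_disease": 2.94,    # present = 1, absent = 0
    "false_lumen_extent_deg": 1.03,       # per angular degree
    "max_aortic_diameter_mm": 1.10,       # per millimetre
    "false_lumen_outflow_ml_min": 0.999,  # per mL/min (protective)
    "intercostal_arteries_n": 0.89,       # per artery (protective)
}

def linear_predictor(x):
    """Cox linear predictor: sum of ln(HR_i) * x_i over the predictors."""
    return sum(math.log(HAZARD_RATIOS[k]) * v for k, v in x.items())

def relative_hazard(x, reference):
    """Hazard of patient x relative to a reference patient."""
    return math.exp(linear_predictor(x) - linear_predictor(reference))
```

    For example, two otherwise identical patients differing only in connective tissue disease differ in hazard by exactly the reported HR of 2.94.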

  1. Localized thin-section CT with radiomics feature extraction and machine learning to classify early-detected pulmonary nodules from lung cancer screening

    NASA Astrophysics Data System (ADS)

    Tu, Shu-Ju; Wang, Chih-Wei; Pan, Kuang-Tse; Wu, Yi-Cheng; Wu, Chen-Te

    2018-03-01

    Lung cancer screening aims to detect small pulmonary nodules and decrease the mortality rate of those affected. However, studies from large-scale clinical trials of lung cancer screening have shown that the false-positive rate is high and the positive predictive value is low. To address these problems, a technical approach is greatly needed for accurate malignancy differentiation among these early-detected nodules. We studied the clinical feasibility of an additional protocol of localized thin-section CT for further assessment of recalled patients from lung cancer screening tests. Our approach of localized thin-section CT was integrated with radiomics feature extraction and machine learning classification supervised by pathological diagnosis. Localized thin-section CT images of 122 nodules were retrospectively reviewed and 374 radiomics features were extracted. In this study, 48 nodules were benign and 74 malignant. There were nine patients with multiple nodules and four with synchronous multiple malignant nodules. Different machine learning classifiers with stratified ten-fold cross-validation were used and repeated 100 times to evaluate classification accuracy. Of the image features extracted from the thin-section CT images, 238 (64%) were useful in differentiating between benign and malignant nodules. These useful features included CT density (p = 0.002518), sigma (p = 0.002781), uniformity (p = 0.03241), and entropy (p = 0.006685). The highest classification accuracy was 79%, achieved by the logistic classifier. The performance metrics of this logistic classification model were 0.80 for the positive predictive value, 0.36 for the false-positive rate, and 0.80 for the area under the receiver operating characteristic curve.
Our approach of direct risk classification supervised by the pathological diagnosis with localized thin-section CT and radiomics feature extraction may support clinical physicians in determining truly malignant nodules and therefore reduce problems in lung cancer screening.
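
    The evaluation protocol above, stratified ten-fold cross-validation repeated many times, can be sketched generically; this is a re-implementation of the idea, not the authors' code:

```python
import numpy as np

def stratified_folds(y, k=10, repeats=1, seed=0):
    """Yield (train_idx, test_idx) pairs for repeated stratified k-fold CV.

    Each class's indices are shuffled and dealt round-robin into k folds,
    so every fold preserves the benign/malignant ratio of the whole set.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y)
    for _ in range(repeats):
        folds = [[] for _ in range(k)]
        for cls in np.unique(y):
            idx = rng.permutation(np.where(y == cls)[0])
            for i, j in enumerate(idx):
                folds[i % k].append(j)
        for f in range(k):
            test = np.sort(np.array(folds[f]))
            train = np.sort(np.concatenate(
                [folds[g] for g in range(k) if g != f]))
            yield train, test
```

    With the abstract's 48 benign and 74 malignant nodules, every test fold then holds roughly 4–5 benign and 7–8 malignant cases.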

  2. Localized thin-section CT with radiomics feature extraction and machine learning to classify early-detected pulmonary nodules from lung cancer screening.

    PubMed

    Tu, Shu-Ju; Wang, Chih-Wei; Pan, Kuang-Tse; Wu, Yi-Cheng; Wu, Chen-Te

    2018-03-14

    Lung cancer screening aims to detect small pulmonary nodules and decrease the mortality rate of those affected. However, studies from large-scale clinical trials of lung cancer screening have shown that the false-positive rate is high and the positive predictive value is low. To address these problems, a technical approach is greatly needed for accurate malignancy differentiation among these early-detected nodules. We studied the clinical feasibility of an additional protocol of localized thin-section CT for further assessment of recalled patients from lung cancer screening tests. Our approach of localized thin-section CT was integrated with radiomics feature extraction and machine learning classification supervised by pathological diagnosis. Localized thin-section CT images of 122 nodules were retrospectively reviewed and 374 radiomics features were extracted. In this study, 48 nodules were benign and 74 malignant. There were nine patients with multiple nodules and four with synchronous multiple malignant nodules. Different machine learning classifiers with stratified ten-fold cross-validation were used and repeated 100 times to evaluate classification accuracy. Of the image features extracted from the thin-section CT images, 238 (64%) were useful in differentiating between benign and malignant nodules. These useful features included CT density (p = 0.002518), sigma (p = 0.002781), uniformity (p = 0.03241), and entropy (p = 0.006685). The highest classification accuracy was 79%, achieved by the logistic classifier. The performance metrics of this logistic classification model were 0.80 for the positive predictive value, 0.36 for the false-positive rate, and 0.80 for the area under the receiver operating characteristic curve.
Our approach of direct risk classification supervised by the pathological diagnosis with localized thin-section CT and radiomics feature extraction may support clinical physicians in determining truly malignant nodules and therefore reduce problems in lung cancer screening.

  3. Decoding of visual activity patterns from fMRI responses using multivariate pattern analyses and convolutional neural network.

    PubMed

    Zafar, Raheel; Kamel, Nidal; Naufal, Mohamad; Malik, Aamir Saeed; Dass, Sarat C; Ahmad, Rana Fayyaz; Abdullah, Jafri M; Reza, Faruque

    2017-01-01

    Decoding of human brain activity has always been a primary goal in neuroscience, especially with functional magnetic resonance imaging (fMRI) data. In recent years, the convolutional neural network (CNN) has become a popular method for feature extraction due to its higher accuracy; however, it requires a large amount of computation and training data. In this study, an algorithm is developed using multivariate pattern analysis (MVPA) and a modified CNN to decode brain behavior for different images with a limited data set. Selection of significant features is an important part of fMRI data analysis, since it reduces the computational burden and improves prediction performance; significant features are selected using the t-test. MVPA uses machine learning algorithms to classify different brain states and helps in prediction during the task. A general linear model (GLM) is used to find the unknown parameters of every individual voxel, and classification is done using a multi-class support vector machine (SVM). The proposed MVPA-CNN based algorithm is compared with a region of interest (ROI) based method and with MVPA based estimated values. The proposed method showed better overall accuracy (68.6%) compared to ROI (61.88%) and estimation values (64.17%).
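
    The t-test feature-selection step can be sketched with a generic two-sample t statistic per feature; the synthetic data below stand in for voxel features and are not the study's data:

```python
import numpy as np

def t_scores(X, y):
    """Two-sample t statistic per feature (column) between classes 0 and 1."""
    a, b = X[y == 0], X[y == 1]
    mean_diff = a.mean(axis=0) - b.mean(axis=0)
    se = np.sqrt(a.var(axis=0, ddof=1) / len(a) +
                 b.var(axis=0, ddof=1) / len(b))
    return mean_diff / se

def select_features(X, y, n_keep):
    """Indices of the n_keep features with the largest |t| score."""
    return np.argsort(-np.abs(t_scores(X, y)))[:n_keep]
```

    The retained feature subset would then be passed to the multi-class SVM.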

  4. A radiomics model from joint FDG-PET and MRI texture features for the prediction of lung metastases in soft-tissue sarcomas of the extremities

    NASA Astrophysics Data System (ADS)

    Vallières, M.; Freeman, C. R.; Skamene, S. R.; El Naqa, I.

    2015-07-01

    This study aims at developing a joint FDG-PET and MRI texture-based model for the early evaluation of lung metastasis risk in soft-tissue sarcomas (STSs). We investigate if the creation of new composite textures from the combination of FDG-PET and MR imaging information could better identify aggressive tumours. Towards this goal, a cohort of 51 patients with histologically proven STSs of the extremities was retrospectively evaluated. All patients had pre-treatment FDG-PET and MRI scans comprised of T1-weighted and T2-weighted fat-suppression sequences (T2FS). Nine non-texture features (SUV metrics and shape features) and forty-one texture features were extracted from the tumour region of separate (FDG-PET, T1 and T2FS) and fused (FDG-PET/T1 and FDG-PET/T2FS) scans. Volume fusion of the FDG-PET and MRI scans was implemented using the wavelet transform. The influence of six different extraction parameters on the predictive value of textures was investigated. The incorporation of features into multivariable models was performed using logistic regression. The multivariable modeling strategy involved imbalance-adjusted bootstrap resampling in the following four steps leading to final prediction model construction: (1) feature set reduction; (2) feature selection; (3) prediction performance estimation; and (4) computation of model coefficients. Univariate analysis showed that the isotropic voxel size at which texture features were extracted had the most impact on predictive value. In multivariable analysis, texture features extracted from fused scans significantly outperformed those from separate scans in terms of lung metastases prediction estimates. The best performance was obtained using a combination of four texture features extracted from FDG-PET/T1 and FDG-PET/T2FS scans. This model reached an area under the receiver-operating characteristic curve of 0.984 ± 0.002, a sensitivity of 0.955 ± 0.006, and a specificity of 0.926 ± 0.004 in bootstrapping evaluations. 
Ultimately, lung metastasis risk assessment at diagnosis of STSs could improve patient outcomes by allowing better treatment adaptation.
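
    The bootstrap performance estimates (AUC with resampling) can be illustrated with a generic sketch; this mirrors the idea of bootstrap evaluation only loosely and omits the paper's imbalance adjustment:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc(scores, labels, n_boot=1000, seed=0):
    """Mean and standard deviation of AUC over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(scores), len(scores))
        if len(np.unique(labels[idx])) < 2:
            continue  # a resample must contain both classes
        vals.append(auc(scores[idx], labels[idx]))
    return float(np.mean(vals)), float(np.std(vals))
```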

  5. Detection of reflecting surfaces by a statistical model

    NASA Astrophysics Data System (ADS)

    He, Qiang; Chu, Chee-Hung H.

    2009-02-01

    Remote sensing is widely used to assess the destruction from natural disasters and to plan relief and recovery operations. Automatically extracting useful features and segmenting interesting objects from digital images, including remote sensing imagery, is a critical task for image understanding. Unfortunately, current research on automated feature extraction often ignores contextual information. As a result, the fidelity of the attributes assigned to features and objects of interest suffers. In this paper, we present an approach to meaningful object extraction that incorporates reflecting surfaces. Detection of specular reflecting surfaces is useful for target identification and can be applied to environmental monitoring, disaster prediction and analysis, military applications, and counter-terrorism. Our method is based on a statistical model that captures the statistical properties of specular reflecting surfaces; the reflecting surfaces are then detected through cluster analysis.
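
    One simple reading of "statistical model plus cluster analysis" is clustering pixel brightness and taking the brightest cluster as candidate specular regions. A toy sketch under that assumption (not the authors' actual model):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Minimal 1-D k-means: cluster pixel brightness into k groups."""
    values = np.asarray(values, dtype=float)
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        assign = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = values[assign == j].mean()
    return assign, centers

def specular_mask(image):
    """Label the brightest cluster as candidate specular reflection."""
    assign, centers = kmeans_1d(image.ravel())
    bright = np.argmax(centers)
    return (assign == bright).reshape(image.shape)
```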

  6. Performance prediction of optical image stabilizer using SVM for shaker-free production line

    NASA Astrophysics Data System (ADS)

    Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo

    2016-04-01

    Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to non-OIS camera modules, the cost of implementing OIS remains high. One reason is that the production line for OIS camera modules requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.

  7. An efficient sampling algorithm for uncertain abnormal data detection in biomedical image processing and disease prediction.

    PubMed

    Liu, Fei; Zhang, Xi; Jia, Yan

    2015-01-01

    In this paper, we propose a computer information processing algorithm that can be used for biomedical image processing and disease prediction. A biomedical image is considered a data object in a multi-dimensional space. Each dimension is a feature that can be used for disease diagnosis. We introduce a new concept of the top (k1,k2) outlier. It can be used to detect abnormal data objects in the multi-dimensional space. This technique focuses on uncertain space, where each data object has several possible instances with distinct probabilities. We design an efficient sampling algorithm for the top (k1,k2) outlier in uncertain space. Some improvement techniques are used for acceleration. Experiments show our methods' high accuracy and high efficiency.
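
    A Monte-Carlo sketch of scoring uncertain objects as outliers: each object has several possible instances with probabilities, and we estimate how often each object lands among the k1 most deviant points across sampled worlds. This illustrates only the uncertain-space setting; the paper's top (k1,k2) definition and its efficient sampling algorithm are more involved:

```python
import random

def outlier_probability(objects, k1=1, n_samples=2000, seed=0):
    """Monte-Carlo estimate of each uncertain object's probability of
    being among the k1 points farthest from the sample mean.

    objects: one list of (value, probability) instances per object."""
    rng = random.Random(seed)
    hits = [0] * len(objects)
    for _ in range(n_samples):
        # draw one possible instance per object according to its probabilities
        world = [rng.choices([v for v, _ in o], [p for _, p in o])[0]
                 for o in objects]
        mean = sum(world) / len(world)
        ranked = sorted(range(len(world)),
                        key=lambda i: abs(world[i] - mean), reverse=True)
        for i in ranked[:k1]:
            hits[i] += 1
    return [h / n_samples for h in hits]
```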

  8. Explaining neural signals in human visual cortex with an associative learning model.

    PubMed

    Jiang, Jiefeng; Schmajuk, Nestor; Egner, Tobias

    2012-08-01

    "Predictive coding" models posit a key role for associative learning in visual cognition, viewing perceptual inference as a process of matching (learned) top-down predictions (or expectations) against bottom-up sensory evidence. At the neural level, these models propose that each region along the visual processing hierarchy entails one set of processing units encoding predictions of bottom-up input, and another set computing mismatches (prediction error or surprise) between predictions and evidence. This contrasts with traditional views of visual neurons operating purely as bottom-up feature detectors. In support of the predictive coding hypothesis, a recent human neuroimaging study (Egner, Monti, & Summerfield, 2010) showed that neural population responses to expected and unexpected face and house stimuli in the "fusiform face area" (FFA) could be well-described as a summation of hypothetical face-expectation and -surprise signals, but not by feature detector responses. Here, we used computer simulations to test whether these imaging data could be formally explained within the broader framework of a mathematical neural network model of associative learning (Schmajuk, Gray, & Lam, 1996). Results show that FFA responses could be fit very closely by model variables coding for conditional predictions (and their violations) of stimuli that unconditionally activate the FFA. These data document that neural population signals in the ventral visual stream that deviate from classic feature detection responses can formally be explained by associative prediction and surprise signals.

  9. Improved classification accuracy of powdery mildew infection levels of wine grapes by spatial-spectral analysis of hyperspectral images.

    PubMed

    Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo

    2017-01-01

    Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets either relies on pixel-wise processing of the full spectral information, appropriate selection of individual bands, or calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the objects measured as well as a loss of information intrinsic to band selection and use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application for the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) for the huge number of spectral bands independently, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. 
However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach improved especially the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
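
    The LDA step, compressing many spectral bands into a few descriptive image bands, can be sketched with a two-class Fisher discriminant; this is a generic formulation, not the authors' pipeline:

```python
import numpy as np

def fisher_lda_direction(X, y):
    """Two-class Fisher discriminant: w ~ Sw^-1 (mu1 - mu0), the band
    weighting that best separates the class means relative to the
    within-class scatter Sw. A small ridge keeps Sw invertible."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1) +
          np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), mu1 - mu0)
    return w / np.linalg.norm(w)

def project_bands(cube, w):
    """Project an (H, W, bands) hyperspectral cube onto w -> one band."""
    return cube @ w
```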

  10. Improved GSO Optimized ESN Soft-Sensor Model of Flotation Process Based on Multisource Heterogeneous Information Fusion

    PubMed Central

    Wang, Jie-sheng; Han, Shuang; Shen, Na-na

    2014-01-01

    For predicting the key technology indicators (concentrate grade and tailings recovery rate) of the flotation process, an echo state network (ESN) based fusion soft-sensor model optimized by an improved glowworm swarm optimization (GSO) algorithm is proposed. Firstly, the color features (saturation and brightness) and texture features (angular second moment, sum entropy, inertia moment, etc.) based on the grey-level co-occurrence matrix (GLCM) are adopted to describe the visual characteristics of the flotation froth image. Then the kernel principal component analysis (KPCA) method is used to reduce the dimensionality of the high-dimensional input vector composed of the flotation froth image characteristics and process data, extracting the nonlinear principal components in order to reduce the ESN dimension and network complexity. The ESN soft-sensor model of the flotation process is optimized by the GSO algorithm with a congestion factor. Simulation results show that the model has better generalization and prediction accuracy, meeting the online soft-sensor requirements of real-time control in the flotation process. PMID:24982935
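
    The GLCM texture features named above can be sketched for a single pixel offset; this minimal illustration computes the angular second moment, entropy, and inertia (contrast), not the full multi-offset feature set:

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    q = (image * levels / (image.max() + 1e-12)).astype(int)
    q = q.clip(0, levels - 1)          # quantize to `levels` grey levels
    m = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[q[i, j], q[i + dy, j + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Angular second moment, entropy, and inertia (contrast) of a GLCM."""
    i, j = np.indices(p.shape)
    asm = (p ** 2).sum()
    entropy = -(p[p > 0] * np.log(p[p > 0])).sum()
    inertia = ((i - j) ** 2 * p).sum()
    return asm, entropy, inertia
```

    A perfectly uniform image yields a GLCM concentrated in one cell: ASM of 1, entropy of 0, inertia of 0.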

  11. Vision-guided gripping of a cylinder

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.

  12. [Magnetic Resonance Imaging Conversion Predictors of Clinically Isolated Syndrome to Multiple Sclerosis].

    PubMed

    Peixoto, Sara; Abreu, Pedro

    2016-11-01

    Clinically isolated syndrome may be the first manifestation of multiple sclerosis, a chronic demyelinating disease of the central nervous system, and it is defined by a single clinical episode suggestive of demyelination. However, patients with this syndrome, even with long-term follow-up, may not develop new symptoms or demyelinating lesions that fulfil multiple sclerosis diagnostic criteria. We reviewed which magnetic resonance imaging findings best predict the conversion of clinically isolated syndrome to multiple sclerosis. A search was made in the PubMed database for papers published between January 2010 and June 2015 using the following terms: 'clinically isolated syndrome', 'cis', 'multiple sclerosis', 'magnetic resonance imaging', 'magnetic resonance' and 'mri'. In this review, the following conventional magnetic resonance imaging abnormalities found in the literature were included: lesion load, lesion location, Barkhof's criteria and brain atrophy related features. The non-conventional magnetic resonance imaging techniques studied were double inversion recovery, magnetization transfer imaging, spectroscopy and diffusion tensor imaging. The number and location of demyelinating lesions have a clear role in predicting the conversion of clinically isolated syndrome to multiple sclerosis. On the other hand, more data are needed to confirm the predictive ability of non-conventional techniques and the remaining neuroimaging abnormalities. In forthcoming years, in addition to the established predictive value of the above-mentioned neuroimaging abnormalities, different clinically isolated syndrome neuroradiological findings may be considered in multiple sclerosis diagnostic criteria and/or change its treatment recommendations.

  13. Second harmonic generation imaging as a potential tool for staging pregnancy and predicting preterm birth

    NASA Astrophysics Data System (ADS)

    Akins, Meredith L.; Luby-Phelps, Katherine; Mahendroo, Mala

    2010-03-01

    We use second harmonic generation (SHG) microscopy to assess changes in collagen structure of murine cervix during cervical remodeling of normal pregnancy and in a preterm birth model. Visual inspection of SHG images revealed substantial changes in collagen morphology throughout normal gestation. SHG images collected in both the forward and backward directions were analyzed quantitatively for changes in overall mean intensity, forward to backward intensity ratio, collagen fiber size, and porosity. Changes in mean SHG intensity and intensity ratio take place in early pregnancy, suggesting that submicroscopic changes in collagen fibril size and arrangement occur before macroscopic changes become evident. Fiber size progressively increased from early to late pregnancy, while pores between collagen fibers became larger and farther apart. Analysis of collagen features in premature cervical remodeling show that changes in collagen structure are dissimilar from normal remodeling. The ability to quantify multiple morphological features of collagen that characterize normal cervical remodeling and distinguish abnormal remodeling in preterm birth models supports future studies aimed at development of SHG endoscopic devices for clinical assessment of collagen changes during pregnancy in women and for predicting risk of preterm labor which occurs in 12.5% of all pregnancies.
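
    The quantitative measures described, mean SHG intensity, forward-to-backward intensity ratio, and porosity, reduce to simple image statistics. A sketch (the pore threshold below is an arbitrary placeholder, not a value from the study):

```python
import numpy as np

def shg_metrics(forward, backward, pore_threshold=0.2):
    """Summary metrics from paired forward/backward SHG images:
    mean forward intensity, forward-to-backward ratio, and porosity
    (fraction of pixels darker than a threshold, i.e. between fibers)."""
    mean_f = float(forward.mean())
    fb_ratio = mean_f / float(backward.mean())
    porosity = float((forward < pore_threshold).mean())
    return mean_f, fb_ratio, porosity
```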

  14. Detection and recognition of simple spatial forms

    NASA Technical Reports Server (NTRS)

    Watson, A. B.

    1983-01-01

    A model of human visual sensitivity to spatial patterns is constructed. The model predicts the visibility and discriminability of arbitrary two-dimensional monochrome images. The image is analyzed by a large array of linear feature sensors, which differ in spatial frequency, phase, orientation, and position in the visual field. All sensors have one octave frequency bandwidths, and increase in size linearly with eccentricity. Sensor responses are processed by an ideal Bayesian classifier, subject to uncertainty. The performance of the model is compared to that of the human observer in detecting and discriminating some simple images.

  15. [Identification of green tea brands based on hyperspectral imaging technology].

    PubMed

    Zhang, Hai-Liang; Liu, Xiao-Li; Zhu, Feng-Le; He, Yong

    2014-05-01

    Hyperspectral imaging technology was developed to identify different brands of famous green tea based on the fusion of PCA information and image information. First, 512 spectral images of six brands of famous green tea in the 380-1023 nm wavelength range were collected, and principal component analysis (PCA) was performed with the goal of selecting two characteristic bands (545 and 611 nm) that could potentially be used for a classification system. Then, 12 grey-level co-occurrence matrix (GLCM) features (i.e., mean, covariance, homogeneity, energy, contrast, correlation, entropy, inverse gap, contrast, difference from the second order and autocorrelation) based on the statistical moment were extracted from each characteristic band image. Finally, the integration of the 12 texture features and three PCA spectral characteristics for each green tea sample was used as the input of an LS-SVM. Experimental results showed that the discrimination rate was 100% in the prediction set. Receiver operating characteristic (ROC) curve assessment methods were used to evaluate the LS-SVM classification algorithm. Overall, the results sufficiently demonstrate that hyperspectral imaging technology can be used to perform classification of green tea.

  16. Using cellular automata to generate image representation for biological sequences.

    PubMed

    Xiao, X; Shao, S; Ding, Y; Huang, Z; Chen, X; Chou, K-C

    2005-02-01

    A novel approach to visualize biological sequences is developed based on cellular automata (Wolfram, S. Nature 1984, 311, 419-424), a set of discrete dynamical systems in which space and time are discrete. By transforming the symbolic sequence codes into digital codes, and using some optimal space-time evolvement rules of cellular automata, a biological sequence can be represented by a unique image, the so-called cellular automata image. Many important features, which are originally hidden in a long and complicated biological sequence, can be clearly revealed through its cellular automata image. With the number of biological sequences entering databanks rapidly increasing in the post-genomic era, it is anticipated that the cellular automata image will become a very useful vehicle for investigating their key features, identifying their function, and revealing their "fingerprint". By using the concept of the pseudo amino acid composition (Chou, K.C. Proteins: Structure, Function, and Genetics, 2001, 43, 246-255), the cellular automata image approach may also improve the quality of predicting protein attributes, such as structural class and subcellular location.
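
    The space-time evolution that turns a digitized sequence into a cellular automata image can be sketched with a standard 1-D elementary automaton. Rule 30 and the purine/pyrimidine coding below are arbitrary choices for illustration; the paper's optimal evolvement rules and symbol coding are not specified in the abstract:

```python
import numpy as np

def evolve_ca(initial_row, rule=30, steps=32):
    """Evolve a 1-D binary cellular automaton with wraparound boundaries.
    Each row of the returned array is one time step, so the stacked
    array is the 'cellular automata image'."""
    rule_bits = [(rule >> i) & 1 for i in range(8)]
    rows = [np.asarray(initial_row, dtype=int)]
    for _ in range(steps):
        r = rows[-1]
        left, right = np.roll(r, 1), np.roll(r, -1)
        idx = 4 * left + 2 * r + right  # neighborhood as a 3-bit number
        rows.append(np.array([rule_bits[i] for i in idx]))
    return np.stack(rows)

def sequence_to_row(seq):
    """Digitize a nucleotide sequence into a binary starting row
    (purine = 1, pyrimidine = 0; one simple choice of coding)."""
    return [1 if ch in "AG" else 0 for ch in seq]
```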

  17. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    PubMed

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image in two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and the north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms the other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
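    The image conversion described above can be sketched as follows: speed observations indexed by time interval and road segment are arranged into a two-dimensional time-space matrix, which the CNN then treats as an image. The record format and the simple mean-fill for missing cells are assumptions for illustration.

```python
def time_space_matrix(records, n_times, n_segments):
    """Arrange (time_index, segment_index, speed) records into a 2-D matrix.

    Rows are time intervals, columns are road segments; cells with no
    observation are filled with the mean observed speed as a simple imputation.
    """
    m = [[None] * n_segments for _ in range(n_times)]
    speeds = []
    for t, s, v in records:
        m[t][s] = v
        speeds.append(v)
    mean = sum(speeds) / len(speeds)
    return [[mean if v is None else v for v in row] for row in m]

obs = [(0, 0, 60.0), (0, 1, 30.0), (1, 0, 50.0)]
print(time_space_matrix(obs, n_times=2, n_segments=2))
```

    Stacking such matrices over consecutive time windows yields the image-like input on which convolutional filters can pick up spatiotemporal speed patterns.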

  19. A High-Resolution Tile-Based Approach for Classifying Biological Regions in Whole-Slide Histopathological Images

    PubMed Central

    Hoffman, R.A.; Kothari, S.; Phan, J.H.; Wang, M.D.

    2016-01-01

    Computational analysis of histopathological whole slide images (WSIs) has emerged as a potential means for improving cancer diagnosis and prognosis. However, an open issue relating to the automated processing of WSIs is the identification of biological regions such as tumor, stroma, and necrotic tissue on the slide. We develop a method for classifying WSI portions (512×512-pixel tiles) into biological regions by (1) extracting a set of 461 image features from each WSI tile, (2) optimizing tile-level prediction models using nested cross-validation on a small (600 tile) manually annotated tile-level training set, and (3) validating the models against a much larger (1.7×10^6 tile) data set for which ground truth was available on the whole-slide level. We calculated the predicted prevalence of each tissue region and compared this prevalence to the ground truth prevalence for each image in an independent validation set. Results show significant correlation between the predicted (using automated system) and reported biological region prevalences with p < 0.001 for eight of nine cases considered. PMID:27532012
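    The whole-slide validation step above reduces to comparing predicted and reported class prevalences; a minimal sketch follows. The class names and the use of Pearson correlation here are illustrative.

```python
import math

def prevalence(tile_labels, classes):
    """Fraction of tiles assigned to each biological region class."""
    n = len(tile_labels)
    return {c: sum(1 for t in tile_labels if t == c) / n for c in classes}

def pearson(xs, ys):
    """Pearson correlation between predicted and reported prevalences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pred = prevalence(["tumor", "tumor", "stroma", "necrosis"],
                  ["tumor", "stroma", "necrosis"])
print(pred)
```

    Correlating these per-slide prevalence vectors against whole-slide ground truth is what allows tile-level models to be validated without tile-level annotations.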

  1. Training of polyp staging systems using mixed imaging modalities.

    PubMed

    Wimmer, Georg; Gadermayr, Michael; Kwitt, Roland; Häfner, Michael; Tamaki, Toru; Yoshida, Shigeto; Tanaka, Shinji; Merhof, Dorit; Uhl, Andreas

    2018-05-04

    In medical image data sets, the number of images is usually quite small. The small number of training samples does not allow classifiers to be trained properly, which leads to massive overfitting to the training data. In this work, we investigate whether increasing the number of training samples by merging datasets from different imaging modalities can be effectively applied to improve predictive performance. Further, we investigate whether the extracted features from the employed image representations differ between imaging modalities and whether domain adaptation helps to overcome these differences. We employ twelve feature extraction methods to differentiate between non-neoplastic and neoplastic lesions. Experiments are performed using four different classifier training strategies, each with a different combination of training data. The specifically designed setup for these experiments enables a fair comparison between the four training strategies. Combining high definition with high magnification training data and chromoscopic with non-chromoscopic training data partly improved the results. The usage of domain adaptation has only a small effect on the results compared to just using non-adapted training data. Merging datasets from different imaging modalities turned out to be partially beneficial for combining high definition endoscopic data with high magnification endoscopic data and for combining chromoscopic with non-chromoscopic data. NBI and chromoendoscopy, on the other hand, are mostly too different with respect to the extracted features to combine images of these two modalities for classifier training. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features

    PubMed Central

    Bakas, Spyridon; Akbari, Hamed; Sotiras, Aristeidis; Bilello, Michel; Rozycki, Martin; Kirby, Justin S.; Freymann, John B.; Farahani, Keyvan; Davatzikos, Christos

    2017-01-01

    Gliomas belong to a group of central nervous system tumors, and consist of various sub-regions. Gold standard labeling of these sub-regions in radiographic imaging is essential for both clinical and computational studies, including radiomic and radiogenomic analyses. Towards this end, we release segmentation labels and radiomic features for all pre-operative multimodal magnetic resonance imaging (MRI) (n=243) of the multi-institutional glioma collections of The Cancer Genome Atlas (TCGA), publicly available in The Cancer Imaging Archive (TCIA). Pre-operative scans were identified in both glioblastoma (TCGA-GBM, n=135) and low-grade-glioma (TCGA-LGG, n=108) collections via radiological assessment. The glioma sub-region labels were produced by an automated state-of-the-art method and manually revised by an expert board-certified neuroradiologist. An extensive panel of radiomic features was extracted based on the manually-revised labels. This set of labels and features should enable i) direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as ii) performance evaluation of computer-aided segmentation methods, and comparison to our state-of-the-art method. PMID:28872634

  3. Feature reduction and payload location with WAM steganalysis

    NASA Astrophysics Data System (ADS)

    Ker, Andrew D.; Lubenko, Ivans

    2009-02-01

    WAM steganalysis is a feature-based classifier for detecting LSB matching steganography, presented in 2006 by Goljan et al. and demonstrated to be sensitive even to small payloads. This paper makes three contributions to the development of the WAM method. First, we benchmark some variants of WAM on a number of sets of cover images, and we are able to quantify the significance of differences in results between different machine learning algorithms based on WAM features. It turns out that, like many of its competitors, WAM is not effective in certain types of cover, and furthermore it is hard to predict which types of cover are suitable for WAM steganalysis. Second, we demonstrate that only a few of the features used in WAM steganalysis do almost all of the work, so that a simplified WAM steganalyser can be constructed in exchange for a little less detection power. Finally, we demonstrate how the WAM method can be extended to provide forensic tools that identify the location (and potentially the content) of LSB matching payload, given a number of stego images with payload placed in the same locations. Although easily evaded, this is a plausible situation if the same stego key is mistakenly re-used for embedding in multiple images.
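    The payload-location idea in the final contribution can be caricatured in a few lines: when payload sits at the same positions in several stego images, positions whose values are consistently poorly predicted by their neighbours accumulate high residual scores. The simple two-neighbour predictor and the 1-D "images" here are illustrative stand-ins for the WAM wavelet residuals.

```python
def locate_payload(images, k):
    """Rank positions by mean absolute residual across stego images.

    images: equally sized 1-D pixel lists; payload positions are assumed
    identical across images. Edge positions are not scored.
    """
    n = len(images[0])
    scores = [0.0] * n
    for img in images:
        for i in range(1, n - 1):
            pred = (img[i - 1] + img[i + 1]) / 2.0  # simple local predictor
            scores[i] += abs(img[i] - pred)
    return sorted(range(n), key=lambda i: -scores[i])[:k]

# Three smooth ramps, each perturbed at position 2 by the embedding.
stegos = [[0, 10, 21, 30, 40], [5, 15, 26, 35, 45], [2, 12, 23, 32, 42]]
print(locate_payload(stegos, 1))
```

    Averaging over many stego images is what makes the payload positions stand out against the per-image texture noise, mirroring the forensic scenario of a re-used stego key.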

  4. Web-based thyroid imaging reporting and data system: Malignancy risk of atypia of undetermined significance or follicular lesion of undetermined significance thyroid nodules calculated by a combination of ultrasonography features and biopsy results.

    PubMed

    Choi, Young Jun; Baek, Jung Hwan; Shin, Jung Hee; Shim, Woo Hyun; Kim, Seon-Ok; Lee, Won-Hong; Song, Dong Eun; Kim, Tae Yong; Chung, Ki-Wook; Lee, Jeong Hyun

    2018-05-13

    The purpose of this study was to construct a web-based predictive model using ultrasound characteristics and subcategorized biopsy results for thyroid nodules of atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS) to stratify the risk of malignancy. Data included 672 thyroid nodules from 656 patients in a historical cohort. We analyzed ultrasound images of thyroid nodules and biopsy results according to nuclear atypia and architectural atypia. Multivariate logistic regression analysis was performed to predict whether nodules were diagnosed as malignant or benign. The ultrasound features of spiculated margin, marked hypoechogenicity, and calcifications, together with biopsy results and cytologic atypia, showed significant differences between groups. A 13-point risk scoring system was developed, and the areas under the receiver operating characteristic (ROC) curve (AUC) for the development and validation sets were 0.837 and 0.830, respectively (http://www.gap.kr/thyroidnodule_b3.php). We devised a web-based predictive model combining ultrasound characteristics and biopsy results for AUS/FLUS thyroid nodules to stratify malignancy risk. © 2018 Wiley Periodicals, Inc.
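    A point-based risk model of this kind can be sketched as below. The feature names mirror those reported as significant, but the point values and the score cutoff are entirely hypothetical; the published 13-point system is available at the linked calculator.

```python
# Hypothetical point assignments, for illustration only.
POINTS = {
    "spiculated_margin": 3,
    "marked_hypoechogenicity": 2,
    "calcifications": 2,
    "nuclear_atypia_on_biopsy": 4,
    "architectural_atypia_on_biopsy": 2,
}

def risk_score(findings):
    """Sum points for the ultrasound/biopsy findings that are present."""
    return sum(POINTS[f] for f, present in findings.items() if present)

def risk_category(score, cutoff=6):
    """Dichotomize the score into low/high malignancy risk (cutoff assumed)."""
    return "high" if score >= cutoff else "low"

case = {"spiculated_margin": True, "calcifications": True,
        "marked_hypoechogenicity": False, "nuclear_atypia_on_biopsy": True,
        "architectural_atypia_on_biopsy": False}
print(risk_score(case), risk_category(risk_score(case)))
```

    In the actual model the points come from logistic regression coefficients, and performance is summarized by the AUC on development and validation sets rather than a single cutoff.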

  5. Automatic Detection and Classification of Colorectal Polyps by Transferring Low-Level CNN Features From Nonmedical Domain.

    PubMed

    Zhang, Ruikai; Zheng, Yali; Mak, Tony Wing Chung; Yu, Ruoxi; Wong, Sunny H; Lau, James Y W; Poon, Carmen C Y

    2017-01-01

    Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Although polypectomy at an early stage reduces CRC incidence, 90% of polyps are small and diminutive, and removing them poses risks to patients that may outweigh the benefits. Correctly detecting and predicting polyp type during colonoscopy allows endoscopists to resect and discard the tissue without submitting it for histology, saving time and costs. Nevertheless, human visual observation of early stage polyps varies. Therefore, this paper aims at developing a fully automatic algorithm to detect and classify hyperplastic and adenomatous colorectal polyps. Adenomatous polyps should be removed, whereas distal diminutive hyperplastic polyps are considered clinically insignificant and may be left in situ. A novel transfer learning application is proposed utilizing features learned from big nonmedical datasets of 1.4-2.5 million images using deep convolutional neural networks. The endoscopic images we collected for the experiment were taken under random lighting conditions, zooming and optical magnification, including 1104 endoscopic nonpolyp images taken under both white-light and narrowband imaging (NBI) endoscopy and 826 NBI endoscopic polyp images, of which 263 were hyperplasia and 563 were adenoma as confirmed by histology. The proposed method first identified polyp images from nonpolyp images and then predicted the polyp histology. When compared with visual inspection by endoscopists, the results of this study show that the proposed method has similar precision (87.3% versus 86.4%) but a higher recall rate (87.6% versus 77.0%) and a higher accuracy (85.9% versus 74.3%). In conclusion, automatic algorithms can assist endoscopists in identifying polyps that are adenomatous but have been incorrectly judged as hyperplasia and, therefore, enable timely resection of these polyps at an early stage before they develop into invasive cancer.
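    For reference, the precision, recall, and accuracy figures compared above are computed from binary confusion counts as follows; the counts in the example are made up.

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, and accuracy from binary confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, accuracy

# Hypothetical counts: 90 true positives, 10 false positives,
# 15 false negatives, 85 true negatives.
p, r, a = metrics(tp=90, fp=10, fn=15, tn=85)
print(p, r, a)
```

    A higher recall at similar precision, as reported here, means fewer adenomas are missed without flagging many more benign polyps.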

  6. Diagnosis of breast masses from dynamic contrast-enhanced and diffusion-weighted MR: a machine learning approach.

    PubMed

    Cai, Hongmin; Peng, Yanxia; Ou, Caiwen; Chen, Minsheng; Li, Li

    2014-01-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast cancer diagnosis as a supplement to conventional imaging techniques. Combining diffusion-weighted imaging (DWI) with morphological and kinetic features from DCE-MRI to improve the discrimination of malignant from benign breast masses is rarely reported. The study comprised 234 female patients with 85 benign and 149 malignant lesions. Four distinct groups of features, coupled with pathological tests, were estimated to comprehensively characterize the pictorial properties of each lesion, which was obtained by a semi-automated segmentation method. A classical machine learning scheme including feature subset selection and various classification methods was employed to build a prognostic model, which served as a foundation for evaluating the combined effects of the multi-sided features in predicting lesion type. Various measurements, including cross validation and receiver operating characteristics, were used to quantify the diagnostic performance of each feature as well as their combination. Seven features were all found to be statistically different between the malignant and the benign groups, and their combination achieved the highest classification accuracy. The seven features include one pathological variable of age, one morphological variable of slope, three texture features of entropy, inverse difference and information correlation, one kinetic feature of SER, and one DWI feature of apparent diffusion coefficient (ADC). Together with the selected diagnostic features, various classical classification schemes were used to test their discrimination power through cross validation. The averaged measurements of sensitivity, specificity, AUC, and accuracy were 0.85, 0.89, 0.909, and 0.93, respectively. Multi-sided variables characterizing the morphological, kinetic, and pathological properties together with the DWI measurement of ADC can dramatically improve the discriminatory power for breast lesions.

  7. Boost OCR accuracy using iVector based system combination approach

    NASA Astrophysics Data System (ADS)

    Peng, Xujun; Cao, Huaigu; Natarajan, Prem

    2015-01-01

    Optical character recognition (OCR) is a challenging task because most existing preprocessing approaches are sensitive to writing style, writing material, noise, and image resolution. Thus, a single recognition system cannot address all factors of real document images. In this paper, we describe an approach to combining diverse recognition systems using iVector-based features, a method recently developed in the field of speaker verification. Prior to system combination, document images are preprocessed and text line images are extracted with different approaches for each system; an iVector is transformed from a high-dimensional supervector of each text line and is used to predict the accuracy of OCR. We merge hypotheses from multiple recognition systems according to the overlap ratio and the predicted OCR score of text line images. We present evaluation results on an Arabic document database, where the proposed method is compared against the single best OCR system using the word error rate (WER) metric.

  8. Estimating False Positive Contamination in Crater Annotations from Citizen Science Data

    NASA Astrophysics Data System (ADS)

    Tar, P. D.; Bugiolacchi, R.; Thacker, N. A.; Gilmour, J. D.

    2017-01-01

    Web-based citizen science often involves the classification of image features by large numbers of minimally trained volunteers, such as the identification of lunar impact craters under the Moon Zoo project. Whilst such approaches facilitate the analysis of large image data sets, the inexperience of users and ambiguity in image content can lead to contamination from false positive identifications. We present an approach, using Linear Poisson Models and image template matching, that can quantify levels of false positive contamination in citizen science Moon Zoo crater annotations. Linear Poisson Models are a form of machine learning which supports predictive error modelling and goodness-of-fit testing, unlike most alternative machine learning methods. The proposed supervised learning system can reduce the variability in crater counts whilst providing predictive error assessments of the estimated quantities of remaining true versus false annotations. In an area of research influenced by human subjectivity, the proposed method provides a level of objectivity through the utilisation of image evidence, guided by candidate crater identifications.

  9. The Low Backscattering Objects Classification in Polsar Image Based on Bag of Words Model Using Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Yang, L.; Shi, L.; Li, P.; Yang, J.; Zhao, L.; Zhao, B.

    2018-04-01

    Due to forward scattering and blocking of the radar signal, water, bare soil, and shadow, collectively named low backscattering objects (LBOs), often present low backscattering intensity in polarimetric synthetic aperture radar (PolSAR) images. Because LBOs give rise to similar backscattering intensities and polarimetric responses, intensity-based classifiers such as the Wishart method are ineffective for LBO classification. Although some polarimetric features have been exploited to relieve this confusion, the backscattering features are still found to be unstable when the system noise floor varies in the range direction. This paper introduces a simple but effective scene classification method based on the Bag of Words (BoW) model using a Support Vector Machine (SVM) to discriminate LBOs, without relying on any polarimetric features. In the proposed approach, square windows are first opened adaptively around the LBOs to define scene images, and then Scale-Invariant Feature Transform (SIFT) points are detected in the training and test scenes. The detected SIFT features are clustered using K-means to obtain cluster centers serving as the visual word list, and each scene image is represented by its word-frequency histogram. Finally, an SVM is trained to predict the LBO class of new scenes. The proposed method is evaluated on two AIRSAR data sets at C band and L band, including water, bare soil, and shadow scenes. The experimental results illustrate the effectiveness of the scene-based method in distinguishing LBOs.
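    The bag-of-words step above (cluster the descriptors, then represent each scene by word frequencies) can be sketched as follows. The toy 2-D "descriptors" stand in for 128-D SIFT vectors, and the cluster centers are assumed to come from K-means.

```python
def nearest_word(desc, centers):
    """Index of the closest visual word (squared Euclidean distance)."""
    return min(range(len(centers)),
               key=lambda k: sum((d - c) ** 2 for d, c in zip(desc, centers[k])))

def bow_histogram(descriptors, centers):
    """Normalized word-frequency vector representing one scene image."""
    counts = [0] * len(centers)
    for d in descriptors:
        counts[nearest_word(d, centers)] += 1
    total = len(descriptors)
    return [c / total for c in counts]

centers = [(0.0, 0.0), (10.0, 10.0)]            # visual word list (from K-means)
scene = [(0.1, 0.2), (0.3, 0.1), (9.8, 10.1)]   # toy descriptors for one scene
print(bow_histogram(scene, centers))
```

    These fixed-length histograms are what the SVM consumes, which is why the method needs no polarimetric features at all.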

  10. Going Deeper With Contextual CNN for Hyperspectral Image Classification.

    PubMed

    Lee, Hyungtae; Kwon, Heesung

    2017-10-01

    In this paper, we describe a novel deep convolutional neural network (CNN) that is deeper and wider than other existing deep networks for hyperspectral image classification. Unlike current state-of-the-art approaches in CNN-based hyperspectral image classification, the proposed network, called contextual deep CNN, can optimally explore local contextual interactions by jointly exploiting local spatio-spectral relationships of neighboring individual pixel vectors. The joint exploitation of the spatio-spectral information is achieved by a multi-scale convolutional filter bank used as an initial component of the proposed CNN pipeline. The initial spatial and spectral feature maps obtained from the multi-scale filter bank are then combined together to form a joint spatio-spectral feature map. The joint feature map representing rich spectral and spatial properties of the hyperspectral image is then fed through a fully convolutional network that eventually predicts the corresponding label of each pixel vector. The proposed approach is tested on three benchmark data sets: the Indian Pines data set, the Salinas data set, and the University of Pavia data set. Performance comparison shows enhanced classification performance of the proposed approach over the current state-of-the-art on the three data sets.

  11. Modeling first impressions from highly variable facial images.

    PubMed

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
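    The linear modelling step can be illustrated with a one-attribute ordinary least squares fit; the closed-form solution below stands in for the full multi-attribute regression, and the attribute/trait values are invented.

```python
def fit_line(xs, ys):
    """Closed-form least squares for trait ~ a * attribute + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical: one measured facial attribute vs. a rated trait score.
a, b = fit_line([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)
```

    With many attributes this becomes a multiple regression, and the fitted coefficients play the role of the factor-attribute correlations used to rank attributes by importance.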

  12. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality of experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector representing the stereoscopic image in terms of visual comfort. In the second stage, this high-dimensional feature vector is mapped to a single visual comfort score by the random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
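    The first-stage features above are saliency-weighted disparity statistics; a minimal sketch of the weighted mean and variance over flattened maps follows (the values are illustrative).

```python
def weighted_disparity_stats(disparity, saliency):
    """Saliency-weighted mean and variance over a flattened disparity map."""
    w_total = sum(saliency)
    mean = sum(d * w for d, w in zip(disparity, saliency)) / w_total
    var = sum(w * (d - mean) ** 2 for d, w in zip(disparity, saliency)) / w_total
    return mean, var

# Salient regions (weight 1.0) dominate the statistics; the large
# disparities in non-salient regions are ignored.
d = [2.0, 2.0, 8.0, 8.0]
s = [1.0, 1.0, 0.0, 0.0]
print(weighted_disparity_stats(d, s))
```

    Weighting by saliency reflects the intuition that discomfort is driven mostly by disparities where viewers actually look.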

  13. Synaptic plasticity in a cerebellum-like structure depends on temporal order

    NASA Astrophysics Data System (ADS)

    Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty

    1997-05-01

    Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.
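    The timing rule reported here can be caricatured in a few lines: depression only when the postsynaptic spike follows EPSP onset within the 60 ms window, otherwise a small non-depressing change. The magnitudes are invented; only the sign convention and the window follow the abstract.

```python
DEPRESSION_WINDOW_MS = 60.0

def weight_update(dt_ms, depress=-0.5, otherwise=0.1):
    """Anti-Hebbian update for one EPSP/postsynaptic-spike pairing.

    dt_ms: postsynaptic spike time minus EPSP onset time (ms).
    Depression occurs only if the spike follows EPSP onset within the window;
    the small positive change elsewhere is a hypothetical placeholder.
    """
    if 0.0 <= dt_ms <= DEPRESSION_WINDOW_MS:
        return depress
    return otherwise

print(weight_update(20.0), weight_update(-10.0), weight_update(120.0))
```

    Such a sequence-sensitive depression rule is sufficient, in model terms, for parallel-fibre synapses to sculpt the 'negative image' of a repeatedly paired sensory response.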

  14. Prediction of firmness and soluble solids content of blueberries using hyperspectral reflectance imaging

    USDA-ARS?s Scientific Manuscript database

    Currently, blueberries are inspected and sorted by color, size and/or firmness (or softness) in packinghouses, using different inspection techniques like machine vision and mechanical vibration or impact. A new inspection technique is needed for effectively assessing both external features and inter...

  15. Preoperative Computerized Tomography and Magnetic Resonance Imaging of the Pancreas Predicts Pancreatic Mass and Functional Outcomes After Total Pancreatectomy and Islet Autotransplant.

    PubMed

    Young, Michael C; Theis, Jake R; Hodges, James S; Dunn, Ty B; Pruett, Timothy L; Chinnakotla, Srinath; Walker, Sidney P; Freeman, Martin L; Trikudanathan, Guru; Arain, Mustafa; Robertson, Paul R; Wilhelm, Joshua J; Schwarzenberg, Sarah J; Bland, Barbara; Beilman, Gregory J; Bellin, Melena D

    2016-08-01

    Approximately two thirds of patients will remain on insulin therapy after total pancreatectomy with islet autotransplant (TPIAT) for chronic pancreatitis. We investigated the relationship between measured pancreas volume on computerized tomography or magnetic resonance imaging and features of chronic pancreatitis on imaging, with subsequent islet isolation and diabetes outcomes. Computerized tomography or magnetic resonance imaging was reviewed for pancreas volume (Vitrea software) and the presence or absence of calcifications, atrophy, and a dilated pancreatic duct in 97 patients undergoing TPIAT. The relationships between these features and (1) islet mass isolated and (2) diabetes status at 1 year post-TPIAT were evaluated. Pancreas volume correlated with islet mass measured as total islet equivalents (r = 0.50, P < 0.0001). Mean islet equivalents were reduced by more than half if any one of calcifications, atrophy, or ductal dilatation was observed. Pancreatic calcifications increased the odds of insulin dependence 4.0-fold (1.1, 15). Collectively, pancreas volume and the 3 imaging features were associated with 1-year insulin use (P = 0.07), islet graft failure (P = 0.003), hemoglobin A1c (P = 0.0004), fasting glucose (P = 0.027), and fasting C-peptide level (P = 0.008). Measures of pancreatic parenchymal destruction on imaging, including smaller pancreas volume and calcifications, associate strongly with impaired islet mass and 1-year diabetes outcomes.

  16. Contextual Interactions in Grating Plaid Configurations Are Explained by Natural Image Statistics and Neural Modeling

    PubMed Central

    Ernst, Udo A.; Schiffer, Alina; Persike, Malte; Meinhardt, Günter

    2016-01-01

    Processing natural scenes requires the visual system to integrate local features into global object descriptions. To achieve coherent representations, the human brain uses statistical dependencies to guide weighting of local feature conjunctions. Pairwise interactions among feature detectors in early visual areas may form the early substrate of these local feature bindings. To investigate local interaction structures in visual cortex, we combined psychophysical experiments with computational modeling and natural scene analysis. We first measured contrast thresholds for 2 × 2 grating patch arrangements (plaids), which differed in spatial frequency composition (low, high, or mixed), number of grating patch co-alignments (0, 1, or 2), and inter-patch distances (1° and 2° of visual angle). Contrast thresholds for the different configurations were compared to the prediction of probability summation (PS) among detector families tuned to the four retinal positions. For 1° distance the thresholds for all configurations were larger than predicted by PS, indicating inhibitory interactions. For 2° distance, thresholds were significantly lower compared to PS when the plaids were homogeneous in spatial frequency and orientation, but not when spatial frequencies were mixed or there was at least one misalignment. Next, we constructed a neural population model with horizontal laminar structure, which reproduced the detection thresholds after adaptation of connection weights. Consistent with prior work, contextual interactions were medium-range inhibition and long-range, orientation-specific excitation. However, inclusion of orientation-specific, inhibitory interactions between populations with different spatial frequency preferences were crucial for explaining detection thresholds. Finally, for all plaid configurations we computed their likelihood of occurrence in natural images. 
The likelihoods turned out to be inversely related to the detection thresholds obtained at larger inter-patch distances. However, likelihoods were almost independent of inter-patch distance, implying that natural image statistics could not explain the crowding-like results at short distances. This failure of natural image statistics to resolve the patch distance modulation of plaid visibility remains a challenge to the approach. PMID:27757076
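    The probability-summation baseline used above is commonly computed with Quick/Minkowski pooling over detector sensitivities; the sketch below follows that assumption, with an exponent β that is a typical textbook value rather than one from the study.

```python
def ps_threshold(component_thresholds, beta=3.5):
    """Contrast threshold predicted by probability summation.

    Pools the sensitivities (1/threshold) of independent detector families
    with a Minkowski (Quick pooling) exponent beta.
    """
    pooled = sum((1.0 / t) ** beta for t in component_thresholds) ** (1.0 / beta)
    return 1.0 / pooled

# Four identical patches: the threshold drops by a factor of 4**(1/beta).
print(ps_threshold([0.04, 0.04, 0.04, 0.04]))
```

    Measured thresholds above this prediction indicate inhibitory interactions between the patches; thresholds below it indicate facilitation, which is the logic of the comparison in the study.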

  17. Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.

    PubMed

    Sakaino, Hidetomo

    2016-09-01

    Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computational cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial differential equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer model that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) scheme for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
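
    For context on the discretization trade-off, the following is the classical first-order upwind finite-difference scheme for the 1-D advection equation, the kind of conventional baseline whose numerical diffusion CIP schemes are designed to reduce. A minimal sketch with illustrative grid parameters, not the paper's implementation:

```python
import numpy as np

def advect_upwind(f, c, dt, dx, steps):
    """First-order upwind discretization of the 1-D advection equation
    f_t + c f_x = 0 on a periodic grid."""
    nu = c * dt / dx                   # Courant number; stable for 0 <= nu <= 1
    assert 0.0 <= nu <= 1.0
    f = f.astype(float).copy()
    for _ in range(steps):
        f -= nu * (f - np.roll(f, 1))  # backward difference for c > 0
    return f

x = np.linspace(0.0, 1.0, 100, endpoint=False)
f0 = np.exp(-200.0 * (x - 0.3) ** 2)   # Gaussian intensity blob
f1 = advect_upwind(f0, c=1.0, dt=0.005, dx=0.01, steps=40)
# the blob drifts 0.2 to the right, but upwind diffusion flattens its peak
```

The peak loss visible here is exactly the long-term degradation that a higher-order interpolation profile mitigates.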

  18. Imaging brain tumour microstructure.

    PubMed

    Nilsson, Markus; Englund, Elisabet; Szczepankiewicz, Filip; van Westen, Danielle; Sundgren, Pia C

    2018-05-08

    Imaging is an indispensable tool for brain tumour diagnosis, surgical planning, and follow-up. Definite diagnosis, however, often demands histopathological analysis of microscopic features of tissue samples, which have to be obtained by invasive means. A non-invasive alternative may be to probe corresponding microscopic tissue characteristics by MRI, so-called 'microstructure imaging'. The promise of microstructure imaging is one of 'virtual biopsy', with the goal of offsetting the need for invasive procedures in favour of imaging that can guide pre-surgical planning and can be repeated longitudinally to monitor and predict treatment response. The exploration of such methods is motivated by the striking link between parameters from MRI and tumour histology, for example the correlation between the apparent diffusion coefficient and cellularity. Recent microstructure imaging techniques probe even more subtle and specific features, providing parameters associated with cell shape, size, permeability, and volume distributions. However, the range of scenarios in which these techniques provide reliable imaging biomarkers that can be used to test medical hypotheses or support clinical decisions is as yet unknown. Accurate microstructure imaging may moreover require acquisitions that go beyond conventional data acquisition strategies. This review covers a wide range of candidate microstructure imaging methods based on diffusion MRI and relaxometry, and explores advantages, challenges, and potential pitfalls in brain tumour microstructure imaging. Copyright © 2018. Published by Elsevier Inc.
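
    The ADC-cellularity link mentioned above rests on the mono-exponential diffusion model. A minimal sketch with illustrative signal values (not data from the review):

```python
import numpy as np

def apparent_diffusion_coefficient(s0, sb, b):
    """Mono-exponential diffusion model S(b) = S0 * exp(-b * ADC),
    solved for ADC from a b = 0 image and one diffusion weighting b."""
    return np.log(s0 / sb) / b

# b in s/mm^2; densely cellular tumour tissue restricts water diffusion,
# producing less signal loss at high b and therefore a lower ADC.
adc = apparent_diffusion_coefficient(s0=1000.0, sb=450.0, b=1000.0)  # mm^2/s
```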

  19. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.

    PubMed

    Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng

    2018-03-04

    With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, while most object detection frameworks are established on still images and use only spatial information, which means that feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully convolutional neural network-based object detection framework that incorporates temporal information by using Siamese networks. In the training procedure, first, the prediction network combines multiscale feature maps to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid consistent feature generation. Because the correlation loss requires track-ID and detection-label information, we evaluated our video object detection network on the large-scale ImageNet VID dataset, where it achieves a 69.5% mean average precision (mAP).
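
    The abstract does not give the exact form of the correlation loss, so as a hedged sketch, a cosine-similarity consistency penalty between neighboring-frame features of the same object behaves as described: small when the embedding stays consistent across time, large when it drifts.

```python
import numpy as np

def correlation_loss(feat_t, feat_t1):
    """Toy embedding-consistency penalty between features of the same
    object in neighboring frames: 1 - cosine similarity. The paper's
    exact formulation may differ; this is an illustrative assumption."""
    a = feat_t / np.linalg.norm(feat_t)
    b = feat_t1 / np.linalg.norm(feat_t1)
    return 1.0 - float(a @ b)

consistent = correlation_loss(np.array([1.0, 2.0, 3.0]), np.array([1.1, 2.0, 2.9]))
drifted = correlation_loss(np.array([1.0, 2.0, 3.0]), np.array([-3.0, 0.5, 1.0]))
```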

  20. Yarn-dyed fabric defect classification based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Jing, Junfeng; Dong, Amei; Li, Pengfei

    2017-07-01

    Because manual inspection of yarn-dyed fabric is time consuming and inefficient, a convolutional neural network (CNN) solution based on a modified AlexNet structure is proposed for classifying yarn-dyed fabric defects. A CNN has a powerful capacity for feature extraction and feature fusion, which can simulate the learning mechanism of the human brain. In order to enhance computational efficiency and detection accuracy, the local response normalization (LRN) layers in AlexNet are replaced by batch normalization (BN) layers. During network training, the characteristics of the image are extracted step by step through several convolution operations, and the essential features of the image can be obtained from the edge features. Max pooling layers, dropout layers, and fully connected layers are also employed in the classification model to reduce the computational cost and acquire more precise features of fabric defects. Finally, the defect classes are predicted by the softmax function. The experimental results show the capability of the modified AlexNet model for defect classification and indicate its robustness.
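
    The LRN-to-BN substitution can be illustrated with a training-mode batch-normalization forward pass. This is a generic numpy sketch of the operation, not the authors' implementation:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Training-mode batch normalization over the batch axis: normalize
    each channel to zero mean / unit variance, then apply a learnable
    affine transform (gamma, beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

acts = np.array([[1.0, 50.0],
                 [3.0, 10.0],
                 [2.0, 30.0]])   # batch of 3 samples, 2 channels
out = batch_norm(acts)
```

Unlike LRN, which only rescales by nearby channel activity, BN standardizes each channel's distribution across the batch, which typically stabilizes and speeds up training.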

  1. Texture analysis for survival prediction of pancreatic ductal adenocarcinoma patients with neoadjuvant chemotherapy

    NASA Astrophysics Data System (ADS)

    Chakraborty, Jayasree; Langdon-Embry, Liana; Escalon, Joanna G.; Allen, Peter J.; Lowery, Maeve A.; O'Reilly, Eileen M.; Do, Richard K. G.; Simpson, Amber L.

    2016-03-01

    Pancreatic ductal adenocarcinoma (PDAC) is the fourth leading cause of cancer-related death in the United States. The five-year survival rate for all stages is approximately 6%, and approximately 2% when presenting with distant disease.1 Only 10-20% of all patients present with resectable disease, but recurrence rates are high, with only 5 to 15% remaining free of disease at 5 years. At this time, we are unable to distinguish resectable PDAC patients with occult metastatic disease from those with potentially curable disease. Early classification of these tumor types may eventually lead to changes in initial management, including the use of neoadjuvant chemotherapy or radiation, or in the choice of postoperative adjuvant treatments. Texture analysis is an emerging methodology in oncologic imaging for quantitatively assessing tumor heterogeneity that could potentially aid in the stratification of these patients. The present study derives several texture-based features from CT images of PDAC patients, acquired prior to neoadjuvant chemotherapy, and analyzes their performance, individually as well as in combination, as prognostic markers. A fuzzy minimum redundancy maximum relevance method with a leave-one-image-out technique is included to select discriminating features from the set of extracted features. With a naive Bayes classifier, the proposed method predicts the 5-year overall survival of PDAC patients prior to neoadjuvant therapy and achieves the best results in terms of the area under the receiver operating characteristic curve of 0.858 and accuracy of 83.0% with four-fold cross-validation techniques.
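
    The classifier itself can be sketched as a minimal Gaussian naive Bayes (the fuzzy feature-selection step is omitted); the toy data below are illustrative, not the study's CT features:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian naive Bayes: per-class feature means/variances
    plus class priors, with a conditional-independence assumption."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # log p(c) + sum_j log N(x_j; mu_cj, var_cj)
        ll = -0.5 * (((X[:, None, :] - self.mu) ** 2) / self.var
                     + np.log(2 * np.pi * self.var)).sum(axis=2)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

X = np.array([[1.0, 1.0], [1.2, 0.9], [5.0, 5.1], [4.8, 5.2]])
y = np.array([0, 0, 1, 1])   # e.g. 0 = survived < 5 years, 1 = >= 5 years
pred = GaussianNB().fit(X, y).predict(np.array([[1.1, 1.0], [5.0, 5.0]]))
```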

  2. Longitudinal Changes in Tear Evaporation Rates After Eyelid Warming Therapies in Meibomian Gland Dysfunction.

    PubMed

    Yeo, Sharon; Tan, Jen Hong; Acharya, U Rajendra; Sudarshan, Vidya K; Tong, Louis

    2016-04-01

    Lid warming is the major treatment for meibomian gland dysfunction (MGD). The purpose of this study was to determine the longitudinal changes in tear evaporation after lid warming in patients with MGD. Ninety patients with MGD were enrolled from a dry eye clinic at Singapore National Eye Center in an interventional trial. Participants were treated with hot towel (n = 22), EyeGiene (n = 22), or Blephasteam (n = 22) twice daily, or a single 12-minute session of Lipiflow (n = 24). Ocular surface infrared thermography was performed at baseline and 4 and 12 weeks after treatment, and image features were extracted from the captured images. The baseline conjunctival tear evaporation (TE) rate (n = 90) was 66.1 ± 21.1 W/min. The rates were not significantly different between sexes, ages, symptom severities, tear breakup times, Schirmer test results, corneal fluorescein staining, or treatment groups. Using a general linear model (repeated measures), the conjunctival TE rate was reduced with time after treatment. A higher baseline evaporation rate (≥ 66 W/min) was associated with a greater reduction of the evaporation rate after treatment. Seven of 10 thermography features at baseline were predictive of reduction in irritative symptoms after treatment. Conjunctival TE rates can be effectively reduced by lid warming treatment in some MGD patients. Individual baseline thermography image features can be predictive of the response to lid warming therapy. For patients who do not have excessive TE, additional therapy, for example anti-inflammatory therapy, may be required.

  3. Learning receptive fields using predictive feedback.

    PubMed

    Jehee, Janneke F M; Rothkopf, Constantin; Beck, Jeffrey M; Ballard, Dana H

    2006-01-01

    Previously, it was suggested that feedback connections from higher- to lower-level areas carry predictions of lower-level neural activities, whereas feedforward connections carry the residual error between the predictions and the actual lower-level activities [Rao, R.P.N., Ballard, D.H., 1999. Nature Neuroscience 2, 79-87.]. A computational model implementing the hypothesis learned simple cell receptive fields when exposed to natural images. Here, we use predictive feedback to explain tuning properties in medial superior temporal area (MST). We implement the hypothesis using a new, biologically plausible, algorithm based on matching pursuit, which retains all the features of the previous implementation, including its ability to efficiently encode input. When presented with natural images, the model developed receptive field properties as found in primary visual cortex. In addition, when exposed to visual motion input resulting from movements through space, the model learned receptive field properties resembling those in MST. These results corroborate the idea that predictive feedback is a general principle used by the visual system to efficiently encode natural input.
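
    The matching-pursuit encoding step the model is built on can be sketched as a greedy projection loop: repeatedly pick the dictionary atom most correlated with the residual and subtract its projection. The toy dictionary and signal below are illustrative assumptions:

```python
import numpy as np

def matching_pursuit(x, D, n_iter):
    """Greedy matching pursuit over a (possibly overcomplete) dictionary
    whose atoms are the columns of D."""
    D = D / np.linalg.norm(D, axis=0)   # unit-norm atoms
    residual = x.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coeffs[k] += corr[k]
        residual = residual - corr[k] * D[:, k]
    return coeffs, residual

# two orthogonal atoms plus one redundant diagonal atom
D = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
x = np.array([2.0, 1.0])
coeffs, r = matching_pursuit(x, D, n_iter=10)
```

In the predictive-feedback account, the selected coefficients correspond to the higher-area prediction and the residual to the feedforward error signal.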

  4. Prediction of Response to Neoadjuvant Chemotherapy and Radiation Therapy with Baseline and Restaging 18F-FDG PET Imaging Biomarkers in Patients with Esophageal Cancer.

    PubMed

    Beukinga, Roelof J; Hulshoff, Jan Binne; Mul, Véronique E M; Noordzij, Walter; Kats-Ugurlu, Gursah; Slart, Riemer H J A; Plukker, John T M

    2018-06-01

    Purpose To assess the value of baseline and restaging fluorine 18 (18F) fluorodeoxyglucose (FDG) positron emission tomography (PET) radiomics in predicting pathologic complete response to neoadjuvant chemotherapy and radiation therapy (NCRT) in patients with locally advanced esophageal cancer. Materials and Methods In this retrospective study, 73 patients with histologically confirmed T1/N1-3/M0 or T2-4a/N0-3/M0 esophageal cancer were treated with NCRT followed by surgery (Chemoradiotherapy for Esophageal Cancer followed by Surgery Study regimen) between October 2014 and August 2017. Clinical variables and radiomic features from baseline and restaging 18F-FDG PET were selected by univariable logistic regression and the least absolute shrinkage and selection operator. The selected variables were used to fit a multivariable logistic regression model, which was internally validated by using bootstrap resampling with 20 000 replicates. The performance of this model was compared with reference prediction models composed of maximum standardized uptake value metrics, clinical variables, and baseline radiomic features. Outcome was defined as complete versus incomplete pathologic response (tumor regression grade 1 vs 2-5 according to the Mandard classification). Results Pathologic response was complete in 16 patients (21.9%) and incomplete in 57 patients (78.1%). A prediction model combining clinical T-stage and the restaging (post-NCRT) joint maximum (quantifying image orderliness) yielded an optimism-corrected area under the receiver operating characteristic curve of 0.81. The post-NCRT joint maximum was replaceable with five other redundant post-NCRT radiomic features that provided equal model performance. All reference prediction models exhibited substantially lower discriminatory accuracy.
Conclusion The combination of clinical T-stage and quantitative assessment of post-NCRT 18F-FDG PET orderliness (joint maximum) provided high discriminatory accuracy in predicting pathologic complete response in patients with esophageal cancer. © RSNA, 2018 Online supplemental material is available for this article.
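
    The internal-validation step (bootstrap optimism correction of the AUC) can be sketched generically. `fit` and `score` are placeholder hooks for any model, and the sanity check below uses far fewer replicates than the study's 20 000:

```python
import numpy as np

def auc(y, s):
    """Rank-based AUROC: probability that a random positive outscores
    a random negative (ties count half)."""
    pos, neg = s[y == 1], s[y == 0]
    return float((pos[:, None] > neg).mean() + 0.5 * (pos[:, None] == neg).mean())

def optimism_corrected_auc(X, y, fit, score, n_boot=200, seed=0):
    """Harrell-style bootstrap optimism correction: apparent AUC minus the
    mean of (bootstrap-sample AUC - AUC of the bootstrap model on the
    original data)."""
    rng = np.random.default_rng(seed)
    apparent = auc(y, score(fit(X, y), X))
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:   # skip single-class resamples
            continue
        m = fit(X[idx], y[idx])
        optimism.append(auc(y[idx], score(m, X[idx])) - auc(y, score(m, X)))
    return apparent - float(np.mean(optimism))

# sanity check with a data-independent score: optimism is exactly zero
X = np.array([[0.1], [0.4], [0.35], [0.8], [0.7], [0.9]])
y = np.array([0, 0, 0, 1, 1, 1])
corrected = optimism_corrected_auc(X, y, fit=lambda X, y: None,
                                   score=lambda m, X: X[:, 0], n_boot=50)
```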

  5. Hyperspectral Imaging Analysis for the Classification of Soil Types and the Determination of Soil Total Nitrogen

    PubMed Central

    Jia, Shengyao; Li, Hongyang; Wang, Yanjie; Tong, Renyuan; Li, Qing

    2017-01-01

    Soil is an important environment for crop growth. Quick and accurate access to soil nutrient content information is a prerequisite for scientific fertilization. In this work, hyperspectral imaging (HSI) technology was applied for the classification of soil types and the measurement of soil total nitrogen (TN) content. A total of 183 soil samples collected from Shangyu City (People’s Republic of China) were scanned by a near-infrared hyperspectral imaging system with a wavelength range of 874–1734 nm. The soil samples belonged to three major soil types typical of this area: paddy soil, red soil, and seashore saline soil. The successive projections algorithm (SPA) method was utilized to select effective wavelengths from the full spectrum. Texture features (energy, contrast, homogeneity, and entropy) were extracted from the gray-scale images at the effective wavelengths. The support vector machine (SVM) and partial least squares regression (PLSR) methods were used to establish classification and prediction models, respectively. The results showed that by using the combined data sets of effective wavelengths and texture features for modelling, an optimal correct classification rate of 91.8% could be achieved. The soil samples were first classified, and then local models were established for soil TN according to soil type, which achieved better prediction results than the general models. The overall results indicated that hyperspectral imaging technology could be used for soil type classification and soil TN determination, and that data fusion combining spectral and image texture information showed advantages for the classification of soil types. PMID:28974005
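
    The four texture features named above (energy, contrast, homogeneity, entropy) are standard statistics of a grey-level co-occurrence matrix. A minimal single-offset sketch; the quantization level and offset choice are illustrative assumptions:

```python
import numpy as np

def glcm_features(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, plus the four
    texture features: energy, contrast, homogeneity, entropy.
    `img` must hold integer grey levels in [0, levels)."""
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[img[i, j], img[i + dy, j + dx]] += 1
    p = glcm / glcm.sum()
    ii, jj = np.indices(p.shape)
    nz = p[p > 0]
    return {
        "energy": float((p ** 2).sum()),
        "contrast": float((p * (ii - jj) ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(ii - jj))).sum()),
        "entropy": float(-(nz * np.log2(nz)).sum()),
    }

flat = np.zeros((8, 8), dtype=int)                       # uniform patch
noisy = np.random.default_rng(0).integers(0, 4, (8, 8))  # random texture
f_flat, f_noisy = glcm_features(flat, 4), glcm_features(noisy, 4)
```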

  6. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm, from several available, on a per-image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent an image's viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.

  7. Imaging normal pressure hydrocephalus: theories, techniques, and challenges.

    PubMed

    Keong, Nicole C H; Pena, Alonso; Price, Stephen J; Czosnyka, Marek; Czosnyka, Zofia; Pickard, John D

    2016-09-01

    The pathophysiology of NPH continues to provoke debate. Although guidelines and best-practice recommendations are well established, there remains a lack of consensus about the role of individual imaging modalities in characterizing specific features of the condition and predicting the success of CSF shunting. Variability of clinical presentation and imperfect responsiveness to shunting are obstacles to the application of novel imaging techniques. Few studies have sought to interpret imaging findings in the context of theories of NPH pathogenesis. In this paper, the authors discuss the major streams of thought for the evolution of NPH and the relevance of key imaging studies contributing to the understanding of the pathophysiology of this complex condition.

  8. Prediction of near-term breast cancer risk using local region-based bilateral asymmetry features in mammography

    NASA Astrophysics Data System (ADS)

    Li, Yane; Fan, Ming; Li, Lihua; Zheng, Bin

    2017-03-01

    This study proposed a near-term breast cancer risk assessment model based on local-region bilateral asymmetry features in mammography. The database includes 566 cases who underwent at least two sequential FFDM examinations. The 'prior' examinations in the two series were all interpreted as negative (not recalled). In the 'current' examination, 283 women were diagnosed with cancers and 283 remained negative. The ages of the cancer and negative cases were completely matched. These cases were divided into three subgroups according to age: 152 cases in the 37-49 age bracket, 220 cases in the 50-60 age bracket, and 194 cases in the 61-86 age bracket. For each image, two types of local regions, strip-based regions and difference-of-Gaussian basic element regions, were segmented. After that, structural variation features among pixel values and structural similarity features were computed for the strip regions, and positional features were extracted for the basic element regions. The absolute difference was computed between each feature of the left and right local regions. Next, a multi-layer perceptron classifier was implemented to assess the performance of the features for prediction. Features were then selected according to stepwise regression analysis. The AUC achieved 0.72, 0.75 and 0.71 for these three age-based subgroups, respectively. The maximum adjusted odds ratios were 12.4, 20.56 and 4.91 for these three groups, respectively. This study demonstrates that local region-based bilateral asymmetry features extracted from CC-view mammography could provide useful information to predict near-term breast cancer risk.
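
    The left-right feature-subtraction step can be sketched with toy per-strip features; the real strip-based structural features are richer, so the feature extractor below is only an illustrative stand-in:

```python
import numpy as np

def strip_features(image, n_strips):
    """Mean and standard deviation per vertical strip (a toy stand-in
    for the paper's strip-region structural features)."""
    strips = np.array_split(image, n_strips, axis=1)
    return np.array([[s.mean(), s.std()] for s in strips]).ravel()

def bilateral_asymmetry(left, right, n_strips=4):
    """Per-feature absolute left-right difference; the right image is
    mirrored so strips correspond anatomically."""
    fl = strip_features(left, n_strips)
    fr = strip_features(right[:, ::-1], n_strips)
    return np.abs(fl - fr)

rng = np.random.default_rng(1)
img = rng.random((32, 32))
sym = bilateral_asymmetry(img, img[:, ::-1])   # perfect mirror symmetry -> zeros
```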

  9. Molecular Imaging and Precision Medicine in Breast Cancer.

    PubMed

    Chudgar, Amy V; Mankoff, David A

    2017-01-01

    Precision medicine, basing treatment approaches on patient traits and specific molecular features of disease processes, has an important role in the management of patients with breast cancer as targeted therapies continue to improve. PET imaging offers noninvasive information that is complementary to traditional tissue biomarkers, including information about tumor burden, tumor metabolism, receptor status, and proliferation. Several PET agents that image breast cancer receptors can visually demonstrate the extent and heterogeneity of receptor-positive disease and help predict which tumors are likely to respond to targeted treatments. This review presents applications of PET imaging in the targeted treatment of breast cancer. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Image Discrimination Models With Stochastic Channel Selection

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap considering the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels, rather than using a fixed set, can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.
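
    Sampling channels from a continuum instead of a fixed lattice can be sketched as below; all parameter ranges and distributions are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sample_channels(n, rng=None):
    """Draw a channel bank from continuous parameter distributions
    (position, radial frequency, orientation, bandwidth) rather than
    fixed lattice values."""
    if rng is None:
        rng = np.random.default_rng()
    return {
        "x_deg": rng.uniform(-4.0, 4.0, n),
        "y_deg": rng.uniform(-4.0, 4.0, n),
        "freq_cpd": 2.0 ** rng.uniform(-1.0, 4.0, n),    # ~0.5-16 cycles/deg
        "orient_rad": rng.uniform(0.0, np.pi, n),
        "bandwidth_oct": np.clip(rng.normal(1.4, 0.3, n), 0.5, None),
    }

bank = sample_channels(1000, np.random.default_rng(0))
```

Because no channel parameter sits on a privileged grid point, a translated or rescaled stimulus faces a statistically identical channel bank, which is what makes model performance vary smoothly.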

  11. Biomechanical modelling for breast image registration

    NASA Astrophysics Data System (ADS)

    Lee, Angela; Rajagopal, Vijay; Chung, Jae-Hoon; Bier, Peter; Nielsen, Poul M. F.; Nash, Martyn P.

    2008-03-01

    Breast cancer is a leading cause of death in women. Tumours are usually detected by palpation or X-ray mammography followed by further imaging, such as magnetic resonance imaging (MRI) or ultrasound. The aim of this research is to develop a biophysically-based computational tool that will allow accurate collocation of features (such as suspicious lesions) across multiple imaging views and modalities in order to improve clinicians' diagnosis of breast cancer. We have developed a computational framework for generating individual-specific, 3D finite element models of the breast. MR images were obtained of the breast under gravity loading and neutrally buoyant conditions. Neutrally buoyant breast images, obtained whilst immersing the breast in water, were used to estimate the unloaded geometry of the breast (for present purposes, we have assumed that the densities of water and breast tissue are equal). These images were segmented to isolate the breast tissues, and a tricubic Hermite finite element mesh was fitted to the digitised data points in order to produce a customized breast model. The model was deformed, in accordance with finite deformation elasticity theory, to predict the gravity loaded state of the breast in the prone position. The unloaded breast images were embedded into the reference model and warped based on the predicted deformation. In order to analyse the accuracy of the model predictions, the cross-correlation image comparison metric was used to compare the warped, resampled images with the clinical images of the prone gravity loaded state. We believe that a biomechanical image registration tool of this kind will aid radiologists to provide more reliable diagnosis and localisation of breast cancer.
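
    The cross-correlation image comparison metric used to validate the warped images can be sketched as the correlation of z-scored images; the inputs below are synthetic, not the study's MR data:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: 1.0 for a perfect (linear) match,
    near 0 for unrelated images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(2)
img = rng.random((64, 64))
perfect = ncc(img, 2.0 * img + 3.0)   # invariant to brightness/contrast changes
degraded = ncc(img, img + rng.normal(0.0, 1.0, img.shape))
```

The brightness/contrast invariance is the reason this metric suits comparing a model-warped image with a separately acquired clinical image.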

  12. Texture analysis of medical images for radiotherapy applications

    PubMed Central

    Rizzo, Giovanna

    2017-01-01

    The high-throughput extraction of quantitative information from medical images, known as radiomics, has grown in interest due to the current necessity to quantitatively characterize tumour heterogeneity. In this context, texture analysis, consisting of a variety of mathematical techniques that can describe the grey-level patterns of an image, plays an important role in assessing the spatial organization of different tissues and organs. For these reasons, the potential of texture analysis in the context of radiotherapy has been widely investigated in several studies, especially for the prediction of the treatment response of tumour and normal tissues. Nonetheless, many different factors can affect the robustness, reproducibility and reliability of textural features, thus limiting the impact of this technique. In this review, an overview of the most recent works that have applied texture analysis in the context of radiotherapy is presented, with particular focus on the assessment of tumour and tissue response to radiation. As a preliminary step, the main factors that influence feature estimation are discussed, highlighting the need for more standardized image acquisition and reconstruction protocols and more accurate methods for region-of-interest identification. Despite all these limitations, texture analysis is increasingly demonstrating its ability to improve the characterization of intratumour heterogeneity and the prediction of clinical outcome, although prospective studies and clinical trials are required to draw a more complete picture of the full potential of this technique. PMID:27885836

  13. Molecular classification of patients with grade II/III glioma using quantitative MRI characteristics.

    PubMed

    Bahrami, Naeim; Hartman, Stephen J; Chang, Yu-Hsuan; Delfanti, Rachel; White, Nathan S; Karunamuni, Roshan; Seibert, Tyler M; Dale, Anders M; Hattangadi-Gluth, Jona A; Piccioni, David; Farid, Nikdokht; McDonald, Carrie R

    2018-06-02

    Molecular markers of WHO grade II/III glioma are known to have important prognostic and predictive implications and may be associated with unique imaging phenotypes. The purpose of this study is to determine whether three clinically relevant molecular markers identified in gliomas (IDH, 1p/19q, and MGMT status) show distinct quantitative MRI characteristics on FLAIR imaging. Sixty-one patients with grade II/III gliomas who had molecular data and MRI available prior to radiation were included. Quantitative MRI features were extracted that measured tissue heterogeneity (homogeneity and pixel correlation) and FLAIR border distinctiveness (edge contrast; EC). T-tests were conducted to determine whether patients with different genotypes differ across these features. Logistic regression with LASSO regularization was used to determine the optimal combination of MRI and clinical features for predicting molecular subtypes. Patients with IDH wildtype tumors showed greater signal heterogeneity (p = 0.001) and lower EC (p = 0.008) within the FLAIR region compared to IDH mutant tumors. Among patients with IDH mutant tumors, 1p/19q co-deleted tumors had greater signal heterogeneity (p = 0.002) and lower EC (p = 0.005) compared to 1p/19q intact tumors. MGMT methylated tumors showed lower EC (p = 0.03) compared to the unmethylated group. The combination of FLAIR border distinctness, heterogeneity, and pixel correlation optimally classified tumors by IDH status. Quantitative imaging characteristics of FLAIR heterogeneity and border pattern in grade II/III gliomas may provide unique information for determining molecular status at the time of initial diagnostic imaging, which may then guide subsequent surgical and medical management.

  14. Zone-size nonuniformity of 18F-FDG PET regional textural features predicts survival in patients with oropharyngeal cancer.

    PubMed

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Lee, Li-yu; Chang, Joseph Tung-Chieh; Tsan, Din-Li; Ng, Shu-Hang; Wang, Hung-Ming; Liao, Chun-Ta; Yang, Lan-Yan; Hsu, Ching-Han; Yen, Tzu-Chen

    2015-03-01

    The question as to whether the regional textural features extracted from PET images predict prognosis in oropharyngeal squamous cell carcinoma (OPSCC) remains open. In this study, we investigated the prognostic impact of regional heterogeneity in patients with T3/T4 OPSCC. We retrospectively reviewed the records of 88 patients with T3 or T4 OPSCC who had completed primary therapy. Progression-free survival (PFS) and disease-specific survival (DSS) were the main outcome measures. In an exploratory analysis, a standardized uptake value of 2.5 (SUV 2.5) was taken as the cut-off value for the detection of tumour boundaries. A fixed threshold at 42% of the maximum SUV (SUVmax 42%) and an adaptive threshold method were then used for validation. Regional textural features were extracted from pretreatment 18F-FDG PET/CT images using the grey-level run length encoding method and the grey-level size zone matrix. The prognostic significance of PET textural features was examined using receiver operating characteristic (ROC) curves and Cox regression analysis. Zone-size nonuniformity (ZSNU) was identified as an independent predictor of PFS and DSS. Its prognostic impact was confirmed using both the SUVmax 42% and the adaptive threshold segmentation methods. Based on (1) total lesion glycolysis, (2) uniformity (a local-scale texture parameter), and (3) ZSNU, we devised a prognostic stratification system that allowed the identification of four distinct risk groups. The model combining the three prognostic parameters showed a higher predictive value than each variable alone. ZSNU is an independent predictor of outcome in patients with advanced T-stage OPSCC, and may improve their prognostic stratification.
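
    The two fixed segmentation rules (absolute SUV 2.5 cut-off and 42% of SUVmax) can be sketched directly; the SUV map below is a toy example, not patient data:

```python
import numpy as np

def tumour_mask(suv, method="fixed", cutoff=2.5, fraction=0.42):
    """Tumour-boundary mask from a SUV map using either an absolute
    SUV cut-off or a fixed fraction of SUVmax."""
    if method == "fixed":
        return suv >= cutoff
    return suv >= fraction * suv.max()

suv = np.array([[0.5, 1.0, 2.0],
                [1.5, 8.0, 6.0],
                [0.8, 5.0, 2.6]])
m_fixed = tumour_mask(suv, "fixed")           # SUV >= 2.5
m_frac = tumour_mask(suv, "max_fraction")     # SUV >= 0.42 * 8.0 = 3.36
```

The two rules generally disagree near the tumour edge (here, the 2.6 voxel), which is why the study re-checked its texture findings under both segmentations plus an adaptive method.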

  15. Nonpolitical images evoke neural predictors of political ideology.

    PubMed

    Ahn, Woo-Young; Kishida, Kenneth T; Gu, Xiaosi; Lohrenz, Terry; Harvey, Ann; Alford, John R; Smith, Kevin B; Yaffe, Gideon; Hibbing, John R; Dayan, Peter; Montague, P Read

    2014-11-17

    Political ideologies summarize dimensions of life that define how a person organizes their public and private behavior, including their attitudes associated with sex, family, education, and personal autonomy. Despite the abstract nature of such sensibilities, fundamental features of political ideology have been found to be deeply connected to basic biological mechanisms that may serve to defend against environmental challenges like contamination and physical threat. These results invite the provocative claim that neural responses to nonpolitical stimuli (like contaminated food or physical threats) should be highly predictive of abstract political opinions (like attitudes toward gun control and abortion). We applied a machine-learning method to fMRI data to test the hypotheses that brain responses to emotionally evocative images predict individual scores on a standard political ideology assay. Disgusting images, especially those related to animal-reminder disgust (e.g., mutilated body), generate neural responses that are highly predictive of political orientation even though these neural predictors do not agree with participants' conscious rating of the stimuli. Images from other affective categories do not support such predictions. Remarkably, brain responses to a single disgusting stimulus were sufficient to make accurate predictions about an individual subject's political ideology. These results provide strong support for the idea that fundamental neural processing differences that emerge under the challenge of emotionally evocative stimuli may serve to structure political beliefs in ways formerly unappreciated. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Characterizing trabecular bone structure for assessing vertebral fracture risk on volumetric quantitative computed tomography

    NASA Astrophysics Data System (ADS)

    Nagarajan, Mahesh B.; Checefsky, Walter A.; Abidin, Anas Z.; Tsai, Halley; Wang, Xixi; Hobbs, Susan K.; Bauer, Jan S.; Baum, Thomas; Wismüller, Axel

    2015-03-01

    While the proximal femur is preferred for measuring bone mineral density (BMD) in fracture risk estimation, the introduction of volumetric quantitative computed tomography has revealed stronger associations between BMD and spinal fracture status. In this study, we propose to capture properties of trabecular bone structure in spinal vertebrae with advanced second-order statistical features for purposes of fracture risk assessment. For this purpose, axial multi-detector CT (MDCT) images were acquired from 28 spinal vertebrae specimens using a whole-body 256-row CT scanner with a dedicated calibration phantom. A semi-automated method was used to annotate the trabecular compartment in the central vertebral slice with a circular region of interest (ROI) to exclude cortical bone; pixels within were converted to values indicative of BMD. Six second-order statistical features derived from gray-level co-occurrence matrices (GLCM) and the mean BMD within the ROI were then extracted and used in conjunction with a generalized radial basis functions (GRBF) neural network to predict the failure load of the specimens; true failure load was measured through biomechanical testing. Prediction performance was evaluated with a root-mean-square error (RMSE) metric. The best prediction performance was observed with GLCM feature `correlation' (RMSE = 1.02 ± 0.18), which significantly outperformed all other GLCM features (p < 0.01). GLCM feature correlation also significantly outperformed MDCT-measured mean BMD (RMSE = 1.11 ± 0.17) (p < 10⁻⁴). These results suggest that biomechanical strength prediction in spinal vertebrae can be significantly improved through characterization of trabecular bone structure with GLCM-derived texture features.
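    As an illustration of the second-order statistics involved, here is a minimal NumPy sketch of the GLCM `correlation' feature; the image, gray-level count, and pixel offset below are illustrative, not from the study:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalized gray-level co-occurrence matrix (GLCM)."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            i, j = img[y, x], img[y + dy, x + dx]
            P[i, j] += 1
            P[j, i] += 1  # count both directions for a symmetric matrix
    return P / P.sum()

def glcm_correlation(P):
    """GLCM 'correlation': covariance of co-occurring levels over their variance."""
    idx = np.arange(P.shape[0])
    px = P.sum(axis=1)                      # marginal distribution (symmetric GLCM)
    mu = (idx * px).sum()
    sigma2 = ((idx - mu) ** 2 * px).sum()
    ii, jj = np.meshgrid(idx, idx, indexing="ij")
    return ((ii - mu) * (jj - mu) * P).sum() / sigma2

# a checkerboard of two BMD levels has perfectly anti-correlated horizontal neighbors
img = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0]])
corr = glcm_correlation(glcm(img, levels=2))
```

    On the checkerboard pattern the feature reaches its extreme value of -1, which is the kind of structural regularity this statistic is designed to capture.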

  17. A model of proto-object based saliency

    PubMed Central

    Russell, Alexander F.; Mihalaş, Stefan; von der Heydt, Rudiger; Niebur, Ernst; Etienne-Cummings, Ralph

    2013-01-01

    Organisms use the process of selective attention to optimally allocate their computational resources to the instantaneously most relevant subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of bottom-up attentional selection assume that elementary image features, like intensity, color and orientation, attract attention. Gestalt psychologists, however, argue that humans perceive whole objects before they analyze individual features. This is supported by recent psychophysical studies that show that objects predict eye-fixations better than features. In this report we present a neurally inspired algorithm of object-based, bottom-up attention. The model rivals the performance of state-of-the-art non-biologically-plausible feature-based algorithms (and outperforms biologically plausible feature-based algorithms) in its ability to predict perceptual saliency (eye fixations and subjective interest points) in natural scenes. The model achieves this by computing saliency as a function of proto-objects that establish the perceptual organization of the scene. All computational mechanisms of the algorithm have direct neural correlates, and our results provide evidence for the interface theory of attention. PMID:24184601

  18. Textural features of 18F-FDG PET after two cycles of neoadjuvant chemotherapy can predict pCR in patients with locally advanced breast cancer.

    PubMed

    Cheng, Lin; Zhang, Jianping; Wang, Yujie; Xu, Xiaoli; Zhang, Yongping; Zhang, Yingjian; Liu, Guangyu; Cheng, Jingyi

    2017-08-01

    This study was designed to evaluate the utility of textural features for predicting pathological complete response (pCR) to neoadjuvant chemotherapy (NAC). Sixty-one consecutive patients with locally advanced breast cancer underwent 18F-FDG PET/CT scanning at baseline and after the second course of NAC. Changes to imaging parameters [maximum standardized uptake value (SUVmax), metabolic tumor volume (MTV), total lesion glycolysis (TLG)] and textural features (entropy, coarseness, skewness) between the 2 scans were measured by two independent radiologists. Pathological responses were reviewed by one pathologist, and the significance of the predictive value of each parameter was analyzed using a Chi-squared test. Receiver operating characteristic curve analysis was used to compare the area under the curve (AUC) for each parameter. pCR was observed more often in patients with HER2-positive tumors (22 patients) than in patients with HER2-negative tumors (5 patients) (71.0 vs. 16.7%, p < 0.001). ∆%SUVmax, ∆%entropy and ∆%coarseness were significantly useful for differentiating pCR from non-pCR in the HER2-negative group, and the AUCs for these parameters were 0.928, 0.808 and 0.800, respectively (p = 0.003, 0.032 and 0.037). In the HER2-positive group, ∆%SUVmax and ∆%skewness were moderately useful for predicting pCR, and the respective AUCs were 0.747 and 0.758 (p = 0.033 and 0.026). Although there was no significant difference in the AUCs between groups for these parameters, an additional 3/22 patients in the HER2-positive group with pCR were identified when ∆%skewness and ∆%SUVmax were considered together (p = 0.031). The absolute values for each parameter before NAC and after 2 cycles cannot predict pCR in our patients. Neither ∆%MTV nor ∆%TLG was efficiently predictive of pCR in any group. The early changes in the textural features of 18F-FDG PET images after two cycles of NAC are predictive of pCR in both HER2-negative and HER2-positive patients; this evidence warrants confirmation by further research.
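    The ∆% parameters and AUC analysis described above can be sketched as follows; the SUVmax values and pCR labels are invented for illustration, and only the percent-change formula and the scikit-learn AUC call are the point:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def pct_change(pre, post):
    """Percent change of a PET parameter between baseline and interim scans."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    return 100.0 * (post - pre) / pre

# hypothetical SUVmax values for six patients and their pCR labels
pre  = np.array([10.0, 8.0, 12.0, 9.0, 11.0, 7.0])
post = np.array([ 2.0, 7.5,  3.0, 8.5,  2.5, 6.8])
pcr  = np.array([ 1,   0,    1,   0,    1,   0 ])

delta = pct_change(pre, post)
# a larger SUVmax decrease (more negative delta) should indicate pCR,
# so the AUC is computed with -delta as the score
auc = roc_auc_score(pcr, -delta)
```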

  19. Validation of Noninvasive In Vivo Compound Ultrasound Strain Imaging Using Histologic Plaque Vulnerability Features.

    PubMed

    Hansen, Hendrik H G; de Borst, Gert Jan; Bots, Michiel L; Moll, Frans L; Pasterkamp, Gerard; de Korte, Chris L

    2016-11-01

    Carotid plaque rupture is a major cause of stroke. A key issue for risk stratification is the early identification of rupture-prone plaques. A noninvasive technique, compound ultrasound strain imaging, was developed providing high-resolution radial deformation/strain images of atherosclerotic plaques. This study aims at in vivo validation of compound ultrasound strain imaging in patients by relating the measured strains to typical features of vulnerable plaques derived from histology after carotid endarterectomy. Strains were measured in 34 severely stenotic (>70%) carotid arteries at the culprit lesion site within 48 hours before carotid endarterectomy. In all cases, the lumen-wall boundary was identifiable on B-mode ultrasound, and the imaged cross-section did not move out of the imaging plane from systole to diastole. After endarterectomy, the plaques were processed using a validated histology analysis technique. Locally elevated strain values were observed in regions containing predominantly components related to plaque vulnerability, whereas lower values were observed in fibrous, collagen-rich plaques. The median strain of the inner plaque layer (1 mm thickness) was significantly higher (P<0.01) for (fibro)atheromatous (n=20, strain=0.27%) than that for fibrous plaques (n=14, strain=-0.75%). Also, a significantly larger area percentage of the inner layer revealed strains above 0.5% for (fibro)atheromatous (45.30%) compared with fibrous plaques (31.59%). (Fibro)atheromatous plaques were detected with a sensitivity, specificity, positive predictive value, and negative predictive value of 75%, 86%, 88%, and 71%, respectively. Strain did not significantly correlate with fibrous cap thickness, smooth muscle cell, or macrophage concentration. Compound ultrasound strain imaging allows differentiating (fibro)atheromatous from fibrous carotid artery plaques. © 2016 American Heart Association, Inc.
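    The reported diagnostic metrics follow directly from confusion-matrix counts; a small sketch with counts chosen to be consistent with the abstract's 75%/86%/88%/71% figures (34 plaques: 20 (fibro)atheromatous, 14 fibrous):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # detected atheromatous / all atheromatous
        "specificity": tn / (tn + fp),   # correctly rejected fibrous / all fibrous
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# counts consistent with the abstract: 15/20 atheromatous detected, 12/14 fibrous rejected
m = diagnostic_metrics(tp=15, fp=2, fn=5, tn=12)
```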

  20. Deep learning with convolutional neural network in radiology.

    PubMed

    Yasaka, Koichiro; Akai, Hiroyuki; Kunimatsu, Akira; Kiryu, Shigeru; Abe, Osamu

    2018-04-01

    Deep learning with a convolutional neural network (CNN) has recently been gaining attention for its high performance in image recognition. With this technique, images themselves can be utilized in the learning process, and feature extraction in advance of learning is not required: important features are learned automatically. Thanks to developments in hardware and software, in addition to deep learning techniques themselves, applications of this technique to radiological images for predicting clinically useful information, such as the detection and evaluation of lesions, are beginning to be investigated. This article illustrates basic technical knowledge regarding deep learning with CNNs along the actual course (collecting data, implementing CNNs, and training and testing phases). Pitfalls regarding this technique and how to manage them are also illustrated. We also describe some advanced topics of deep learning, results of recent clinical studies, and future directions for clinical application of deep learning techniques.
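    The basic CNN building block mentioned above, a convolutional filter producing a feature map, can be sketched in plain NumPy; the step-edge image and Sobel kernel are illustrative, since a real CNN learns its kernels rather than using hand-coded ones:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation, the basic building block of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = (image[y:y + kh, x:x + kw] * kernel).sum()
    return out

# a vertical-edge kernel applied to an image containing a step edge
img = np.zeros((5, 6))
img[:, 3:] = 1.0
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
fmap = conv2d(img, sobel_x)   # feature map responds only where the edge sits
```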

  1. Deep neural network-based domain adaptation for classification of remote sensing images

    NASA Astrophysics Data System (ADS)

    Ma, Li; Song, Jiazhen

    2017-10-01

    We investigate the effectiveness of deep neural network for cross-domain classification of remote sensing images in this paper. In the network, class centroid alignment is utilized as a domain adaptation strategy, making the network able to transfer knowledge from the source domain to target domain on a per-class basis. Since predicted labels of target data should be used to estimate the centroid of each class, we use overall centroid alignment as a coarse domain adaptation method to improve the estimation accuracy. In addition, rectified linear unit is used as the activation function to produce sparse features, which may improve the separation capability. The proposed network can provide both aligned features and an adaptive classifier, as well as obtain label-free classification of target domain data. The experimental results using Hyperion, NCALM, and WorldView-2 remote sensing images demonstrated the effectiveness of the proposed approach.

  2. Diverse Region-Based CNN for Hyperspectral Image Classification.

    PubMed

    Zhang, Mengmeng; Li, Wei; Du, Qian

    2018-06-01

    Convolutional neural network (CNN) is of great interest in machine learning and has demonstrated excellent performance in hyperspectral image classification. In this paper, we propose a classification framework, called diverse region-based CNN, which can encode semantic context-aware representation to obtain promising features. By merging a diverse set of discriminative appearance factors, the resulting CNN-based representation exhibits spatial-spectral context sensitivity that is essential for accurate pixel classification. By exploiting diverse region-based inputs to learn contextual interaction features, the proposed method is expected to have more discriminative power. The joint representation containing rich spectral and spatial information is then fed to a fully connected network, and the label of each pixel vector is predicted by a softmax layer. Experimental results with widely used hyperspectral image data sets demonstrate that the proposed method surpasses conventional deep learning-based classifiers and other state-of-the-art classifiers.

  3. Estimation of trabecular bone parameters in children from multisequence MRI using texture-based regression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekadir, Karim, E-mail: karim.lekadir@upf.edu; Hoogendoorn, Corné; Armitage, Paul

    Purpose: This paper presents a statistical approach for the prediction of trabecular bone parameters from low-resolution multisequence magnetic resonance imaging (MRI) in children, thus addressing the limitations of high-resolution modalities such as HR-pQCT, including the significant exposure of young patients to radiation and the limited applicability of such modalities to peripheral bones in vivo. Methods: A statistical predictive model is constructed from a database of MRI and HR-pQCT datasets, to relate the low-resolution MRI appearance in the cancellous bone to the trabecular parameters extracted from the high-resolution images. The description of the MRI appearance is achieved between subjects by using a collection of feature descriptors, which describe the texture properties inside the cancellous bone, and which are invariant to the geometry and size of the trabecular areas. The predictive model is built by fitting to the training data a nonlinear partial least square regression between the input MRI features and the output trabecular parameters. Results: Detailed validation based on a sample of 96 datasets shows correlations >0.7 between the trabecular parameters predicted from low-resolution multisequence MRI based on the proposed statistical model and the values extracted from high-resolution HR-pQCT. Conclusions: The obtained results indicate the promise of the proposed predictive technique for the estimation of trabecular parameters in children from multisequence MRI, thus reducing the need for high-resolution radiation-based scans for a fragile population that is under development and growth.

  4. Building predictive in vitro pulmonary toxicity assays using high-throughput imaging and artificial intelligence.

    PubMed

    Lee, Jia-Ying Joey; Miller, James Alastair; Basu, Sreetama; Kee, Ting-Zhen Vanessa; Loo, Lit-Hsin

    2018-06-01

    Human lungs are susceptible to the toxicity induced by soluble xenobiotics. However, the direct cellular effects of many pulmonotoxic chemicals are not always clear, and thus, a general in vitro assay for testing pulmonotoxicity applicable to a wide variety of chemicals is not currently available. Here, we report a study that uses high-throughput imaging and artificial intelligence to build an in vitro pulmonotoxicity assay by automatically comparing and selecting human lung-cell lines and their associated quantitative phenotypic features most predictive of in vivo pulmonotoxicity. This approach is called "High-throughput In vitro Phenotypic Profiling for Toxicity Prediction" (HIPPTox). We found that the resulting assay based on two phenotypic features of a human bronchial epithelial cell line, BEAS-2B, can accurately classify 33 reference chemicals with human pulmonotoxicity information (88.8% balanced accuracy, 84.6% sensitivity, and 93.0% specificity). In comparison, the predictivity of a standard cell-viability assay on the same set of chemicals is much lower (77.1% balanced accuracy, 84.6% sensitivity, and 69.5% specificity). We also used the assay to evaluate 17 additional test chemicals with unknown/unclear human pulmonotoxicity, and experimentally confirmed that many of the pulmonotoxic reference and predicted-positive test chemicals induce DNA strand breaks and/or activation of the DNA-damage response (DDR) pathway. Therefore, HIPPTox helps us to uncover these common modes-of-action of pulmonotoxic chemicals. HIPPTox may also be applied to other cell types or models, and accelerate the development of predictive in vitro assays for other cell-type- or organ-specific toxicities.

  5. Early prediction of lung cancer recurrence after stereotactic radiotherapy using second order texture statistics

    NASA Astrophysics Data System (ADS)

    Mattonen, Sarah A.; Palma, David A.; Haasbeek, Cornelis J. A.; Senan, Suresh; Ward, Aaron D.

    2014-03-01

    Benign radiation-induced lung injury is a common finding following stereotactic ablative radiotherapy (SABR) for lung cancer, and is often difficult to differentiate from a recurring tumour due to the ablative doses and highly conformal treatment with SABR. Current approaches to treatment response assessment have shown limited ability to predict recurrence within 6 months of treatment. The purpose of our study was to evaluate the accuracy of second order texture statistics for prediction of eventual recurrence based on computed tomography (CT) images acquired within 6 months of treatment, and compare with the performance of first order appearance and lesion size measures. Consolidative and ground-glass opacity (GGO) regions were manually delineated on post-SABR CT images. Automatic consolidation expansion was also investigated to act as a surrogate for GGO position. The top features for prediction of recurrence were all texture features within the GGO and included energy, entropy, correlation, inertia, and first order texture (standard deviation of density). These predicted recurrence with 2-fold cross validation (CV) accuracies of 70-77% at 2-5 months post-SABR, with energy, entropy, and first order texture having leave-one-out CV accuracies greater than 80%. Our results also suggest that automatic expansion of the consolidation region could eliminate the need for manual delineation, and produced reproducible results when compared to manually delineated GGO. If validated on a larger data set, this could lead to a clinically useful computer-aided diagnosis system for prediction of recurrence within 6 months of SABR and allow for early salvage therapy for patients with recurrence.
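    The cross-validated classification step can be sketched as follows, with synthetic two-feature data standing in for the GGO texture features and scikit-learn's LeaveOneOut mirroring the leave-one-out CV mentioned above:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(1)
n = 40
recurrence = np.repeat([0, 1], n // 2)   # injury-only vs recurrence labels (synthetic)
# stand-ins for two GGO texture features (e.g. energy, entropy), shifted for recurrences
X = rng.normal(size=(n, 2)) + recurrence[:, None] * 2.5

# leave-one-out cross-validated accuracy of a simple linear classifier
acc = cross_val_score(LogisticRegression(), X, recurrence, cv=LeaveOneOut()).mean()
```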

  6. Classification of Alzheimer's disease and prediction of mild cognitive impairment-to-Alzheimer's conversion from structural magnetic resource imaging using feature ranking and a genetic algorithm.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Matsuda, Hiroshi

    2017-04-01

    We developed a novel computer-aided diagnosis (CAD) system that uses feature-ranking and a genetic algorithm to analyze structural magnetic resonance imaging data; using this system, we can predict conversion of mild cognitive impairment (MCI)-to-Alzheimer's disease (AD) at between one and three years before clinical diagnosis. The CAD system was developed in four stages. First, we used a voxel-based morphometry technique to investigate global and local gray matter (GM) atrophy in an AD group compared with healthy controls (HCs). Regions with significant GM volume reduction were segmented as volumes of interest (VOIs). Second, these VOIs were used to extract voxel values from the respective atrophy regions in AD, HC, stable MCI (sMCI) and progressive MCI (pMCI) patient groups. The voxel values were then extracted into a feature vector. Third, at the feature-selection stage, all features were ranked according to their respective t-test scores, and a genetic algorithm was used to find the optimal feature subset. The Fisher criterion was used as part of the objective function in the genetic algorithm. Finally, the classification was carried out using a support vector machine (SVM) with 10-fold cross validation. We evaluated the proposed automatic CAD system by applying it to baseline values from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset (160 AD, 162 HC, 65 sMCI and 71 pMCI subjects). The experimental results indicated that the proposed system is capable of distinguishing between sMCI and pMCI patients, and would be appropriate for practical use in a clinical setting. Copyright © 2017 Elsevier Ltd. All rights reserved.
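    The feature-ranking-plus-SVM pipeline (without the genetic-algorithm refinement) can be sketched with scikit-learn; `f_classif` is the two-class ANOVA F-test, equivalent to a t-test ranking for two groups, and all data below are synthetic:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n, d = 60, 100                       # 60 subjects, 100 voxel features (synthetic)
y = np.repeat([0, 1], n // 2)        # sMCI vs pMCI labels (synthetic)
X = rng.normal(size=(n, d))
X[:, :5] += y[:, None] * 1.5         # only the first 5 features carry group information

# rank features by a two-class F-test and keep the top 5,
# then classify with a linear SVM under 10-fold cross-validation
clf = make_pipeline(SelectKBest(f_classif, k=5), SVC(kernel="linear"))
acc = cross_val_score(clf, X, y, cv=10).mean()
```

    Selection is done inside the pipeline so that each CV fold ranks features on its own training split, avoiding selection bias.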

  7. Predicting pathologic tumor response to chemoradiotherapy with histogram distances characterizing longitudinal changes in 18F-FDG uptake patterns

    PubMed Central

    Tan, Shan; Zhang, Hao; Zhang, Yongxue; Chen, Wengen; D’Souza, Warren D.; Lu, Wei

    2013-01-01

    Purpose: A family of fluorine-18 (18F)-fluorodeoxyglucose (18F-FDG) positron-emission tomography (PET) features based on histogram distances is proposed for predicting pathologic tumor response to neoadjuvant chemoradiotherapy (CRT). These features describe the longitudinal change of FDG uptake distribution within a tumor. Methods: Twenty patients with esophageal cancer treated with CRT plus surgery were included in this study. All patients underwent PET/CT scans before (pre-) and after (post-) CRT. The two scans were first rigidly registered, and the original tumor sites were then manually delineated on the pre-PET/CT by an experienced nuclear medicine physician. Two histograms representing the FDG uptake distribution were extracted from the pre- and the registered post-PET images, respectively, both within the delineated tumor. Distances between the two histograms quantify longitudinal changes in FDG uptake distribution resulting from CRT, and thus are potential predictors of tumor response. A total of 19 histogram distances were examined and compared to both traditional PET response measures and Haralick texture features. Receiver operating characteristic analyses and Mann-Whitney U test were performed to assess their predictive ability. Results: Among all tested histogram distances, seven bin-to-bin and seven crossbin distances outperformed traditional PET response measures using maximum standardized uptake value (AUC = 0.70) or total lesion glycolysis (AUC = 0.80). The seven bin-to-bin distances were: L2 distance (AUC = 0.84), χ2 distance (AUC = 0.83), intersection distance (AUC = 0.82), cosine distance (AUC = 0.83), squared Euclidean distance (AUC = 0.83), L1 distance (AUC = 0.82), and Jeffrey distance (AUC = 0.82). 
The seven crossbin distances were: quadratic-chi distance (AUC = 0.89), earth mover distance (AUC = 0.86), fast earth mover distance (AUC = 0.86), diffusion distance (AUC = 0.88), Kolmogorov-Smirnov distance (AUC = 0.88), quadratic form distance (AUC = 0.87), and match distance (AUC = 0.84). These crossbin histogram distance features showed slightly higher prediction accuracy than texture features on post-PET images. Conclusions: The results suggest that longitudinal patterns in 18F-FDG uptake characterized using histogram distances provide useful information for predicting the pathologic response of esophageal cancer to CRT. PMID:24089897
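    Two of the distances above, the bin-to-bin χ² distance and the crossbin earth mover (Wasserstein) distance, can be sketched as follows; the two uptake histograms are invented for illustration:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def chi2_distance(p, q):
    """Bin-to-bin chi-squared distance between two normalized histograms."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    denom = p + q
    mask = denom > 0                 # skip empty bins in both histograms
    return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / denom[mask])

bins = np.arange(5)                              # bin centers of the uptake histogram
pre  = np.array([0.1, 0.2, 0.4, 0.2, 0.1])       # FDG-uptake histogram before CRT (synthetic)
post = np.array([0.4, 0.3, 0.2, 0.1, 0.0])       # histogram after CRT (synthetic)

d_chi2 = chi2_distance(pre, post)
# earth mover distance between the two weighted histograms
d_emd = wasserstein_distance(bins, bins, u_weights=pre, v_weights=post)
```

    Crossbin distances like the earth mover distance account for how far probability mass moves between bins, which is why they can behave differently from bin-to-bin measures on shifted distributions.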

  8. Radiomics in Oncological PET/CT: Clinical Applications.

    PubMed

    Lee, Jeong Won; Lee, Sang Mi

    2018-06-01

    18 F-fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) is widely used for staging, evaluating treatment response, and predicting prognosis in malignant diseases. FDG uptake and volumetric PET parameters such as metabolic tumor volume have been used and are still used as conventional PET parameters to assess biological characteristics of tumors. However, in recent years, additional features derived from PET images by computational processing have been found to reflect intratumoral heterogeneity, which is related to biological tumor features, and to provide additional predictive and prognostic information, which leads to the concept of radiomics. In this review, we focus on recent clinical studies of malignant diseases that investigated intratumoral heterogeneity on PET/CT, and we discuss its clinical role in various cancers.

  9. Deriving stable multi-parametric MRI radiomic signatures in the presence of inter-scanner variations: survival prediction of glioblastoma via imaging pattern analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Rathore, Saima; Bakas, Spyridon; Akbari, Hamed; Shukla, Gaurav; Rozycki, Martin; Davatzikos, Christos

    2018-02-01

    There is mounting evidence that assessment of multi-parametric magnetic resonance imaging (mpMRI) profiles can noninvasively predict survival in many cancers, including glioblastoma. The clinical adoption of mpMRI as a prognostic biomarker, however, depends on its applicability in a multicenter setting, which is hampered by inter-scanner variations. This concept has not been addressed in existing studies. We developed a comprehensive set of within-patient normalized tumor features such as intensity profile, shape, volume, and tumor location, extracted from multicenter mpMRI of two large cohorts (353 patients in total), comprising the Hospital of the University of Pennsylvania (HUP; 252 patients, 3 scanners) and The Cancer Imaging Archive (TCIA; 101 patients, 8 scanners). Inter-scanner harmonization was conducted by normalizing the tumor intensity profile with that of the contralateral healthy tissue. The extracted features were integrated by support vector machines to derive survival predictors. The predictors' generalizability was evaluated within each cohort, by two cross-validation configurations: i) pooled/scanner-agnostic, and ii) across scanners (training in multiple scanners and testing in one). The median survival in each configuration was used as a cut-off to divide patients into long- and short-survivors. Accuracy (ACC) for predicting long- versus short-survivors in these configurations was ACC(pooled) = 79.06% and 84.7%, and ACC(across) = 73.55% and 74.76%, in the HUP and TCIA datasets, respectively. The hazard ratio at 95% confidence interval was 3.87 (2.87-5.20, P<0.001) and 6.65 (3.57-12.36, P<0.001) for the HUP and TCIA datasets, respectively. Our findings suggest that adequate data normalization coupled with machine learning classification allows robust prediction of survival estimates on mpMRI acquired by multiple scanners.
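    The inter-scanner harmonization idea, normalizing the tumor intensity profile by contralateral healthy tissue, can be sketched minimally as a z-score version (the intensity values below are illustrative):

```python
import numpy as np

def harmonize(tumor, healthy):
    """Normalize a tumor intensity profile against contralateral healthy tissue,
    removing scanner-specific offset and gain (a simple z-score sketch)."""
    healthy = np.asarray(healthy, float)
    return (np.asarray(tumor, float) - healthy.mean()) / healthy.std()

# the same lesion seen by two scanners with different gain
tumor_a, healthy_a = np.array([120.0, 130.0]), np.array([90.0, 110.0])
tumor_b, healthy_b = tumor_a * 2.0, healthy_a * 2.0   # scanner B doubles intensities

z_a = harmonize(tumor_a, healthy_a)
z_b = harmonize(tumor_b, healthy_b)   # identical after harmonization
```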

  10. 4D-CT motion estimation using deformable image registration and 5D respiratory motion modeling.

    PubMed

    Yang, Deshan; Lu, Wei; Low, Daniel A; Deasy, Joseph O; Hope, Andrew J; El Naqa, Issam

    2008-10-01

    Four-dimensional computed tomography (4D-CT) imaging technology has been developed for radiation therapy to provide tumor and organ images at the different breathing phases. In this work, a procedure is proposed for estimating and modeling the respiratory motion field from acquired 4D-CT imaging data and predicting tissue motion at the different breathing phases. The 4D-CT image data consist of series of multislice CT volume segments acquired in ciné mode. A modified optical flow deformable image registration algorithm is used to compute the image motion from the CT segments to a common full volume 3D-CT reference. This reference volume is reconstructed using the acquired 4D-CT data at the end-of-exhalation phase. The segments are optimally aligned to the reference volume according to a proposed a priori alignment procedure. The registration is applied using a multigrid approach and a feature-preserving image downsampling maxfilter to achieve better computational speed and higher registration accuracy. The registration accuracy is about 1.1 +/- 0.8 mm for the lung region according to our verification using manually selected landmarks and artificially deformed CT volumes. The estimated motion fields are fitted to two 5D (spatial 3D+tidal volume+airflow rate) motion models: forward model and inverse model. The forward model predicts tissue movements and the inverse model predicts CT density changes as a function of tidal volume and airflow rate. A leave-one-out procedure is used to validate these motion models. The estimated modeling prediction errors are about 0.3 mm for the forward model and 0.4 mm for the inverse model.

  11. Predicting Treatment Response to Intra-arterial Therapies for Hepatocellular Carcinoma with the Use of Supervised Machine Learning-An Artificial Intelligence Concept.

    PubMed

    Abajian, Aaron; Murali, Nikitha; Savic, Lynn Jeanette; Laage-Gaupp, Fabian Max; Nezami, Nariman; Duncan, James S; Schlachter, Todd; Lin, MingDe; Geschwind, Jean-François; Chapiro, Julius

    2018-06-01

    To use magnetic resonance (MR) imaging and clinical patient data to create an artificial intelligence (AI) framework for the prediction of therapeutic outcomes of transarterial chemoembolization by applying machine learning (ML) techniques. This study included 36 patients with hepatocellular carcinoma (HCC) treated with transarterial chemoembolization. The cohort (age 62 ± 8.9 years; 31 men; 13 white; 24 Eastern Cooperative Oncology Group performance status 0, 10 status 1, 2 status 2; 31 Child-Pugh stage A, 4 stage B, 1 stage C; 1 Barcelona Clinic Liver Cancer stage 0, 12 stage A, 10 stage B, 13 stage C; tumor size 5.2 ± 3.0 cm; number of tumors 2.6 ± 1.1; and 30 conventional transarterial chemoembolization, 6 with drug-eluting embolic agents). MR imaging was obtained before and 1 month after transarterial chemoembolization. Image-based tumor response to transarterial chemoembolization was assessed with the use of the 3D quantitative European Association for the Study of the Liver (qEASL) criterion. Clinical information, baseline imaging, and therapeutic features were used to train logistic regression (LR) and random forest (RF) models to predict patients as treatment responders or nonresponders under the qEASL response criterion. The performance of each model was validated using leave-one-out cross-validation. Both LR and RF models predicted transarterial chemoembolization treatment response with an overall accuracy of 78% (sensitivity 62.5%, specificity 82.1%, positive predictive value 50.0%, negative predictive value 88.5%). The strongest predictors of treatment response included a clinical variable (presence of cirrhosis) and an imaging variable (relative tumor signal intensity >27.0). Transarterial chemoembolization outcomes in patients with HCC may be predicted before procedures by combining clinical patient data and baseline MR imaging with the use of AI and ML techniques. Copyright © 2018 SIR. Published by Elsevier Inc. All rights reserved.
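    The LR/RF leave-one-out workflow can be sketched with scikit-learn; the cohort size matches the study, but the labels and the two predictors, a cirrhosis flag and a tumor-signal intensity, are synthetic:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(5)
n = 36
y = np.array([1] * 12 + [0] * 24)                     # responders vs nonresponders (synthetic)
flips = (rng.random(n) < 0.15).astype(int)
cirrhosis = y ^ flips                                 # cirrhosis flag, mostly tracking response
intensity = 25 + 5 * y + rng.normal(0, 1.5, n)        # relative tumor signal intensity
X = np.column_stack([cirrhosis, intensity])

accs = {}
for name, model in [("LR", LogisticRegression()),
                    ("RF", RandomForestClassifier(random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    accs[name] = (pred == y).mean()
```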

  12. Personal recognition using hand shape and texture.

    PubMed

    Kumar, Ajay; Zhang, David

    2006-08-01

    This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. The proposed combination is of significance since both the palmprint and hand-shape images are extracted from a single hand image acquired with a digital camera. Several new hand-shape features that can be used to represent the hand shape and improve performance are investigated. A new approach for palmprint recognition using discrete cosine transform coefficients, which can be directly obtained from the camera hardware, is demonstrated. None of the prior work on hand-shape or palmprint recognition has given any attention to the critical issue of feature selection. Our experimental results demonstrate that while the majority of palmprint or hand-shape features are useful in predicting a subject's identity, only a small subset of these features is necessary in practice for building an accurate identification model. The comparison and combination of the proposed features are evaluated on diverse classification schemes: naive Bayes (normal, estimated, multinomial), decision trees (C4.5, LMT), k-NN, SVM, and FFN. Although more work remains to be done, our results to date indicate that the combination of selected hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.
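    The DCT-coefficient feature extraction can be sketched with SciPy; the random patch stands in for a normalized palmprint region, and keeping an 8×8 low-frequency block is an assumed design choice, not taken from the paper:

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(3)
palm = rng.random((32, 32))                 # stand-in for a normalized palmprint patch

coeffs = dctn(palm, norm="ortho")           # 2-D DCT-II of the patch
# keep only the low-frequency (top-left) coefficients as a compact texture descriptor
feature = np.abs(coeffs[:8, :8]).ravel()
```

    With `norm="ortho"` the transform is orthonormal, so the coefficients preserve the patch's energy while concentrating most of it in the low-frequency block.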

  13. Geometry-based pressure drop prediction in mildly diseased human coronary arteries.

    PubMed

    Schrauwen, J T C; Wentzel, J J; van der Steen, A F W; Gijsen, F J H

    2014-06-03

    Pressure drop (Δp) estimations in human coronary arteries have several important applications, including determination of appropriate boundary conditions for CFD and estimation of fractional flow reserve (FFR). In this study a Δp prediction was made based on geometrical features derived from patient-specific imaging data. Twenty-two mildly diseased human coronary arteries were imaged with computed tomography and intravascular ultrasound. Each artery was modelled in three consecutive steps: from straight to tapered, to stenosed, to curved model. CFD was performed to compute the additional Δp in each model under steady flow for a wide range of Reynolds numbers. The correlations between the added geometrical complexity and additional Δp were used to compute a predicted Δp. This predicted Δp based on geometry was compared to CFD results. The mean Δp calculated with CFD was 855 ± 666 Pa. Tapering and curvature added significantly to the total Δp, accounting for 31.4 ± 19.0% and 18.0 ± 10.9% respectively at Re = 250. Using tapering angle, maximum area stenosis and angularity of the centerline, we were able to generate a good estimate for the predicted Δp with a low mean but high standard deviation: average error of 41.1 ± 287.8 Pa at Re = 250. Furthermore, the predicted Δp was used to accurately estimate FFR (r = 0.93). The effect of the geometric features was determined and the pressure drop in mildly diseased human coronary arteries was predicted quickly based solely on geometry. This pressure drop estimation could serve as a boundary condition in CFD to model the impact of distal epicardial vessels. Copyright © 2014 Elsevier Ltd. All rights reserved.
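    A geometry-only pressure-drop predictor of this kind reduces to a regression on the three named features; here is a least-squares sketch on synthetic data, where the generating coefficients and feature ranges are assumptions, not the study's correlations:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 22                                    # number of vessels, as in the study
taper = rng.uniform(0.0, 2.0, n)          # tapering angle (hypothetical units)
stenosis = rng.uniform(0.0, 50.0, n)      # maximum area stenosis, %
angularity = rng.uniform(0.0, 1.0, n)     # centerline angularity (hypothetical units)

# assumed additive geometric model generating "CFD" pressure drops, in Pa
dp_cfd = 200 * taper + 10 * stenosis + 150 * angularity + rng.normal(0, 20, n)

# least-squares fit of the pressure drop on the three geometric features
A = np.column_stack([taper, stenosis, angularity, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, dp_cfd, rcond=None)
residual = dp_cfd - A @ coef
```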

  14. Texture analysis with statistical methods for wheat ear extraction

    NASA Astrophysics Data System (ADS)

    Bakhouche, M.; Cointault, F.; Gouton, P.

    2007-01-01

    In the agronomic domain, simplifying crop counting, which is necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our overall project is to design a mobile robot for natural image acquisition directly in the field, Arvalis first proposed that we detect the number of wheat ears in images by image processing before counting them, which yields the first component of the yield. In this paper we compare different texture-based image segmentation techniques relying on feature extraction by first- and higher-order statistical methods, applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image: the K-means algorithm is applied before choosing a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
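    The pipeline described above, first-order statistical texture features per pixel followed by K-means clustering, can be sketched as follows. The toy image, window size, and two-cluster setup are assumptions for illustration; the paper's actual feature sets and parameters are not specified here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy grayscale "field" image: two bright, high-contrast patches (ear-like
# regions) on a darker background; purely synthetic.
img = rng.normal(80.0, 5.0, size=(64, 64))
img[20:30, 20:30] += 60.0
img[40:50, 10:20] += 60.0

def first_order_features(img, w=3):
    """Per-pixel mean and variance over a (2w+1)x(2w+1) window."""
    H, W = img.shape
    feats = np.zeros((H, W, 2))
    for i in range(H):
        for j in range(W):
            patch = img[max(0, i - w):i + w + 1, max(0, j - w):j + w + 1]
            feats[i, j] = patch.mean(), patch.var()
    return feats.reshape(-1, 2)

def kmeans(X, k=2, iters=20, seed=0):
    """Plain Lloyd's algorithm for unsupervised pixel classification."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):          # keep old center if cluster empties
                centers[c] = X[labels == c].mean(axis=0)
    return labels

labels = kmeans(first_order_features(img)).reshape(img.shape)
print(labels.shape)
```

    Thresholding the resulting label map (keeping the bright/high-variance cluster) would then highlight the ear regions for counting.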

  15. Microvessel prediction in H&E Stained Pathology Images using fully convolutional neural networks.

    PubMed

    Yi, Faliu; Yang, Lin; Wang, Shidan; Guo, Lei; Huang, Chenglong; Xie, Yang; Xiao, Guanghua

    2018-02-27

    Pathological angiogenesis has been identified in many malignancies as a potential prognostic factor and target for therapy. In most cases, angiogenic analysis is based on the measurement of microvessel density (MVD) detected by immunostaining of CD31 or CD34. However, most retrievable public data are generally composed of Hematoxylin and Eosin (H&E)-stained pathology images, for which it is difficult to obtain the corresponding immunohistochemistry images. The role of microvessels in H&E stained images has not been widely studied due to their complexity and heterogeneity. Furthermore, identifying microvessels manually is a labor-intensive task for pathologists, with high inter- and intra-observer variation. Therefore, it is important to develop automated microvessel-detection algorithms for H&E stained pathology images for clinical association analysis. In this paper, we propose a microvessel prediction method using fully convolutional neural networks. The feasibility of our proposed algorithm is demonstrated through experimental results on H&E stained images. Furthermore, the identified microvessel features were significantly associated with patient clinical outcomes. This is the first study to develop an algorithm for automated microvessel detection in H&E stained pathology images.
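    The defining property of a fully convolutional network, a per-pixel prediction map with the same spatial size as the input, can be sketched with a two-layer forward pass. This is a toy with random, untrained single-channel filters; a real FCN for microvessel detection would use many learned channels, downsampling/upsampling, and skip connections.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv2d_same(x, kernel):
    """Naive 'same'-padded 2-D convolution (single channel)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = (xp[i:i + kh, j:j + kw] * kernel).sum()
    return out

# A toy H&E-like intensity patch and two random 3x3 filters.
img = rng.random((32, 32))
k1, k2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

h = np.maximum(conv2d_same(img, k1), 0.0)   # convolution + ReLU
logits = conv2d_same(h, k2)                 # prediction head
prob = 1.0 / (1.0 + np.exp(-logits))        # per-pixel vessel probability

print(prob.shape)
```

    Because every layer is convolutional, the output probability map keeps the input's spatial dimensions, which is what allows pixel-wise microvessel segmentation rather than a single image-level label.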

  16. Automatic recognition of severity level for diagnosis of diabetic retinopathy using deep visual features.

    PubMed

    Abbas, Qaisar; Fondon, Irene; Sarmiento, Auxiliadora; Jiménez, Soledad; Alemany, Pedro

    2017-11-01

    Diabetic retinopathy (DR) is a leading cause of blindness among diabetic patients. Recognition of the severity level is required by ophthalmologists to detect and diagnose DR early. However, it is a challenging task for both medical experts and computer-aided diagnosis systems because it requires extensive domain expertise. In this article, a novel automatic recognition system for the five severity levels of diabetic retinopathy (SLDR) is developed, without any pre- or post-processing steps on retinal fundus images, through learning of deep visual features (DVFs). These DVF features are extracted from each image using dense color scale-invariant feature and gradient location-orientation histogram techniques. To learn these DVF features, a semi-supervised multilayer deep-learning algorithm is utilized along with a new compressed layer and fine-tuning steps. The SLDR system was evaluated and compared with state-of-the-art techniques using sensitivity (SE), specificity (SP) and area under the receiver operating characteristic curve (AUC). On 750 fundus images (150 per category), an average SE of 92.18%, SP of 94.50% and AUC of 0.924 were obtained. These results demonstrate that the SLDR system is appropriate for early detection of DR and can support effective treatment prediction for diabetic patients.

  17. Preoperative Computerized Tomography and Magnetic Resonance Imaging of the Pancreas Predicts Pancreatic Mass and Functional Outcomes After Total Pancreatectomy and Islet Autotransplant

    PubMed Central

    Young, Michael C.; Theis, Jake R.; Hodges, James S.; Dunn, Ty B.; Pruett, Timothy L.; Chinnakotla, Srinath; Walker, Sidney P.; Freeman, Martin L.; Trikudanathan, Guru; Arain, Mustafa; Robertson, R. Paul; Wilhelm, Joshua J.; Schwarzenberg, Sarah J.; Bland, Barbara; Beilman, Gregory J.; Bellin, Melena D.

    2015-01-01

    Objectives About two-thirds of patients will remain on insulin therapy after total pancreatectomy with islet autotransplant (TPIAT) for chronic pancreatitis. We investigated the relationship between measured pancreas volume on computerized tomography (CT) or magnetic resonance imaging (MRI), and features of chronic pancreatitis on imaging, with subsequent islet isolation and diabetes outcomes. Methods CT or MRI was reviewed for pancreas volume (Vitrea software), and presence or absence of calcifications, atrophy, and dilated pancreatic duct in 97 patients undergoing TPIAT. The relationships between these features and (1) islet mass isolated and (2) diabetes status at 1 year post-TPIAT were evaluated. Results Pancreas volume correlated with islet mass measured as total islet equivalents (r=0.50, p<0.0001). The mean islet equivalent count was reduced by more than half if any one of calcifications, atrophy, or ductal dilatation was observed. Pancreatic calcifications increased the odds of insulin dependence 4.0-fold (1.1, 15). Collectively, the pancreas volume and 3 imaging features were strongly associated with 1 year insulin use (p=0.07), islet graft failure (p=0.003), hemoglobin A1c (p=0.0004), fasting glucose (p=0.027), and fasting C-peptide level (p=0.008). Conclusions Measures of pancreatic parenchymal destruction on imaging, including smaller pancreas volume and calcifications, associate strongly with impaired islet mass and 1 year diabetes outcomes. PMID:26745861

  18. P09.62 Towards individualized survival prediction in glioblastoma patients using machine learning methods

    PubMed Central

    Vera, L.; Pérez-Beteta, J.; Molina, D.; Borrás, J. M.; Benavides, M.; Barcia, J. A.; Velásquez, C.; Albillo, D.; Lara, P.; Pérez-García, V. M.

    2017-01-01

    Abstract Introduction: Machine learning methods are integrated in clinical research studies due to their strong capability to discover parameters with high information content and their combined predictive potential. Several studies have been developed using glioblastoma patients' imaging data. Many of them have focused on including large numbers of variables, mostly two-dimensional textural features and/or genomic data, regardless of their meaning or potential clinical relevance. Materials and methods: 193 glioblastoma patients were included in the study. Preoperative 3D magnetic resonance images were collected and semi-automatically segmented using in-house software. After segmentation, a database of 90 parameters, including geometrical and textural image-based measures together with patients' clinical data (age, survival, type of treatment, etc.), was constructed. The criterion for including variables in the study was that they had either shown individual impact on survival in single or multivariate analyses or had a precise clinical or geometrical meaning. These variables were used to perform several machine learning experiments. In a first set of computational cross-validation experiments based on regression trees, the attributes showing the highest information measures were extracted. In the second phase, more sophisticated learning methods were employed to validate the potential of the previous variables for predicting survival. Concretely, support vector machines, neural networks and sparse grid methods were used. Results: Variables showing a high information measure in the first phase provided the best prediction results in the second phase. Specifically, patient age, Stupp regimen and a geometrical measure related to the irregularity of contrast-enhancing areas were the variables showing the highest information measure in the first stage. For the second phase, the combination of patient age and Stupp regimen together with one tumor geometrical measure and one tumor heterogeneity feature reached the best prediction quality. Conclusions: Advanced machine learning methods identified the parameters with the highest information measure and survival predictive potential. The uninformed machine learning methods identified a novel feature measure with direct impact on survival. Used in combination with other previously known variables, multi-indexes can be defined that can help in tumor characterization and prognosis prediction. Recent advances on the definition of those multi-indexes will be reported at the conference. Funding: James S. McDonnell Foundation (USA) 21st Century Science Initiative in Mathematical and Complex Systems Approaches for Brain Cancer [Collaborative award 220020450 and planning grant 220020420], MINECO/FEDER [MTM2015-71200-R], JCCM [PEII-2014-031-P].
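    The first-phase step, ranking variables by an impurity-based information measure from regression trees, can be sketched with a single-split variance-reduction score. The cohort below is synthetic: the variables, coefficients, and noise are invented so that age and an "irregularity" measure drive survival while a third variable is pure noise.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cohort (illustrative only, not the study's data).
n = 200
age = rng.uniform(40.0, 80.0, n)
irregularity = rng.uniform(0.0, 1.0, n)
noise_var = rng.uniform(0.0, 1.0, n)     # an uninformative variable
survival = 30.0 - 0.2 * age - 10.0 * irregularity + rng.normal(0.0, 1.0, n)

def best_variance_reduction(x, y):
    """Best single-split variance reduction of y over thresholds of x,
    the impurity-based information measure used by regression trees."""
    best = 0.0
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        split_var = (len(left) * left.var() + len(right) * right.var()) / len(y)
        best = max(best, y.var() - split_var)
    return best

scores = {name: best_variance_reduction(x, survival)
          for name, x in [("age", age), ("irregularity", irregularity),
                          ("noise", noise_var)]}
print(scores)
```

    Variables with high scores would then be carried into the second phase (SVMs, neural networks, sparse grids); uninformative variables like `noise_var` score near zero and are dropped.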

  19. An initial investigation on developing a new method to predict short-term breast cancer risk based on deep learning technology

    NASA Astrophysics Data System (ADS)

    Qiu, Yuchen; Wang, Yunzhi; Yan, Shiju; Tan, Maxine; Cheng, Samuel; Liu, Hong; Zheng, Bin

    2016-03-01

    In order to establish a new personalized breast cancer screening paradigm, it is critically important to accurately predict the short-term risk of a woman having image-detectable cancer after a negative mammographic screening. In this study, we developed and tested a novel short-term risk assessment model based on a deep learning method. For the experiment, 270 "prior" negative screening cases were assembled. In the next sequential ("current") screening mammography, 135 cases were positive and 135 cases remained negative. These cases were randomly divided into a training set with 200 cases and a testing set with 70 cases. A deep learning based computer-aided diagnosis (CAD) scheme was then developed for the risk assessment, consisting of two modules: an adaptive feature identification module and a risk prediction module. The adaptive feature identification module is composed of three pairs of convolution and max-pooling layers, which contain 20, 10, and 5 feature maps respectively. The risk prediction module is implemented as a multilayer perceptron (MLP) classifier, which produces a risk score to predict the likelihood of the woman developing short-term mammography-detectable cancer. The results show that the new CAD-based risk model yielded a positive predictive value of 69.2% and a negative predictive value of 74.2%, with a total prediction accuracy of 71.4%. This study demonstrated that applying this new deep learning technology may have significant potential for developing a short-term risk prediction scheme with improved performance in detecting early abnormal symptoms from negative mammograms.
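    The architecture described above, three convolution/max-pooling pairs with 20, 10 and 5 feature maps feeding an MLP, can be sketched as a shape walk. Only the channel counts come from the abstract; the 64x64 input size, the 1x1 channel-mixing "convolutions" (used to keep the sketch short in place of spatial kernels), and the random untrained weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def maxpool2x2(x):
    """2x2 max pooling over a stack of feature maps shaped (C, H, W)."""
    C, H, W = x.shape
    return x[:, :H // 2 * 2, :W // 2 * 2].reshape(
        C, H // 2, 2, W // 2, 2).max(axis=(2, 4))

x = rng.random((1, 64, 64))              # one-channel mammogram ROI (assumed size)
for c_out in (20, 10, 5):                # feature-map counts from the abstract
    c_in = x.shape[0]
    W = rng.normal(size=(c_out, c_in))   # 1x1 channel-mixing weights (toy)
    x = maxpool2x2(np.maximum(np.tensordot(W, x, axes=(1, 0)), 0.0))

features = x.reshape(-1)                 # flatten for the MLP risk classifier
w_out = rng.normal(size=features.size)
risk_score = 1.0 / (1.0 + np.exp(-features @ w_out / features.size))
print(x.shape, round(float(risk_score), 3))
```

    Each pooling stage halves the spatial resolution (64 to 32 to 16 to 8), so the MLP sees a compact 5x8x8 feature volume summarizing the ROI.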

  20. An Improved Method of AGM for High Precision Geolocation of SAR Images

    NASA Astrophysics Data System (ADS)

    Zhou, G.; He, C.; Yue, T.; Huang, W.; Huang, Y.; Li, X.; Chen, Y.

    2018-05-01

    In order to take full advantage of SAR images, it is necessary to obtain a high-precision geolocation for the image. Precise image geolocation is important for ensuring the accuracy of geometric correction and for extracting effective mapping information from the images. This paper presents an improved analytical geolocation method (IAGM) that determines the high-precision geolocation of each pixel in a digital SAR image. The method is based on the analytical geolocation method (AGM) proposed by X. K. Yuan, which aims to solve the range-Doppler (RD) model. Tests will be conducted using a RADARSAT-2 SAR image. Comparing the predicted feature geolocation with the position determined from a high-precision orthophoto, results indicate that an accuracy of 50 m is attainable with this method. Error sources will be analyzed, and some recommendations for improving image location accuracy in future spaceborne SARs will be given.
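    The RD model underlying AGM/IAGM locates a pixel as the point on the Earth's surface that matches the measured slant range and the Doppler condition. The sketch below demonstrates the two constraints with a brute-force grid search on a spherical Earth under zero-Doppler geometry; the satellite state vector and target latitude are invented, and a production method would solve the equations analytically or iteratively rather than by search.

```python
import numpy as np

Re = 6371.0                        # spherical-Earth radius (km)
S = np.array([7071.0, 0.0, 0.0])   # satellite position (km), illustrative
V = np.array([0.0, 7.5, 0.0])      # satellite velocity (km/s), illustrative

# "Measured" slant range, generated from a known ground point at 5 deg
# latitude in this toy geometry (used as ground truth).
P_true = Re * np.array([np.cos(np.radians(5.0)), 0.0, np.sin(np.radians(5.0))])
R = np.linalg.norm(P_true - S)

# Find the surface point satisfying the range equation |P - S| = R and the
# zero-Doppler equation V . (P - S) = 0. Restricting the search to one side
# of the track resolves the classic left/right RD ambiguity.
lat = np.radians(np.linspace(0.0, 10.0, 501))
lon = np.radians(np.linspace(-5.0, 5.0, 501))
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
P = Re * np.stack([np.cos(LAT) * np.cos(LON),
                   np.cos(LAT) * np.sin(LON),
                   np.sin(LAT)], axis=-1)
D = P - S
residual = (np.linalg.norm(D, axis=-1) - R) ** 2 + (D @ V) ** 2
i, j = np.unravel_index(residual.argmin(), residual.shape)
err_km = float(np.linalg.norm(P[i, j] - P_true))
print(err_km)
```

    The recovered point matches the ground truth to within the grid spacing, illustrating why geolocation accuracy hinges on how precisely the range, Doppler, and orbit state are known.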
